Feb 16 20:56:21 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 16 20:56:21 crc restorecon[4694]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 20:56:21 crc restorecon[4694]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc 
restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:21 crc 
restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 
20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 20:56:21 crc 
restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 20:56:21 crc 
restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21
crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 
20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 20:56:21 crc 
restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc 
restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc 
restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 
crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc 
restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc 
restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc 
restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc 
restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc 
restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 20:56:21 crc restorecon[4694]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 20:56:21 crc restorecon[4694]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 16 20:56:22 crc kubenswrapper[4811]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 20:56:22 crc kubenswrapper[4811]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 16 20:56:22 crc kubenswrapper[4811]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 20:56:22 crc kubenswrapper[4811]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 16 20:56:22 crc kubenswrapper[4811]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 16 20:56:22 crc kubenswrapper[4811]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.470568 4811 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.479961 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.479991 4811 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480020 4811 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480028 4811 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480036 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480044 4811 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480051 4811 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480060 4811 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480068 4811 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480077 4811 
feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480085 4811 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480092 4811 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480103 4811 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480113 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480122 4811 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480131 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480139 4811 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480148 4811 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480156 4811 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480164 4811 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480171 4811 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480179 4811 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480187 4811 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 20:56:22 crc kubenswrapper[4811]: 
W0216 20:56:22.480236 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480245 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480252 4811 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480260 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480267 4811 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480275 4811 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480282 4811 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480290 4811 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480298 4811 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480306 4811 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480316 4811 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480323 4811 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480331 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480339 4811 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480346 4811 feature_gate.go:330] unrecognized feature gate: 
PrivateHostedZoneAWS Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480354 4811 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480361 4811 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480374 4811 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480384 4811 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480393 4811 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480401 4811 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480410 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480418 4811 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480427 4811 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480435 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480443 4811 feature_gate.go:330] unrecognized feature gate: Example Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480452 4811 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480459 4811 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480467 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 
20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480475 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480486 4811 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480496 4811 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480504 4811 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480512 4811 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480520 4811 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480528 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480537 4811 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480547 4811 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480556 4811 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480565 4811 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480573 4811 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480581 4811 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480589 4811 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480600 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480607 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480615 4811 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480622 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.480630 4811 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.480770 4811 flags.go:64] FLAG: --address="0.0.0.0" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.480785 4811 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.480798 4811 flags.go:64] FLAG: --anonymous-auth="true" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.480810 4811 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.480822 4811 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 
20:56:22.480832 4811 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.480843 4811 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.480854 4811 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.480863 4811 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.480873 4811 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.480883 4811 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.480892 4811 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.480901 4811 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.480910 4811 flags.go:64] FLAG: --cgroup-root="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.480919 4811 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.480928 4811 flags.go:64] FLAG: --client-ca-file="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.480938 4811 flags.go:64] FLAG: --cloud-config="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.480947 4811 flags.go:64] FLAG: --cloud-provider="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.480955 4811 flags.go:64] FLAG: --cluster-dns="[]" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.480967 4811 flags.go:64] FLAG: --cluster-domain="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.480976 4811 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.480986 4811 flags.go:64] FLAG: --config-dir="" Feb 16 20:56:22 crc 
kubenswrapper[4811]: I0216 20:56:22.480995 4811 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481004 4811 flags.go:64] FLAG: --container-log-max-files="5" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481015 4811 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481025 4811 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481034 4811 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481043 4811 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481052 4811 flags.go:64] FLAG: --contention-profiling="false" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481061 4811 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481070 4811 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481080 4811 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481089 4811 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481099 4811 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481108 4811 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481118 4811 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481127 4811 flags.go:64] FLAG: --enable-load-reader="false" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481137 4811 flags.go:64] FLAG: --enable-server="true" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481146 
4811 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481156 4811 flags.go:64] FLAG: --event-burst="100" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481166 4811 flags.go:64] FLAG: --event-qps="50" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481176 4811 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481186 4811 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481219 4811 flags.go:64] FLAG: --eviction-hard="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481230 4811 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481239 4811 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481249 4811 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481258 4811 flags.go:64] FLAG: --eviction-soft="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481267 4811 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481276 4811 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481285 4811 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481294 4811 flags.go:64] FLAG: --experimental-mounter-path="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481303 4811 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481312 4811 flags.go:64] FLAG: --fail-swap-on="true" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481320 4811 flags.go:64] FLAG: --feature-gates="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481331 4811 
flags.go:64] FLAG: --file-check-frequency="20s" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481340 4811 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481350 4811 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481359 4811 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481368 4811 flags.go:64] FLAG: --healthz-port="10248" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481377 4811 flags.go:64] FLAG: --help="false" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481387 4811 flags.go:64] FLAG: --hostname-override="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481395 4811 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481405 4811 flags.go:64] FLAG: --http-check-frequency="20s" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481414 4811 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481423 4811 flags.go:64] FLAG: --image-credential-provider-config="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481433 4811 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481442 4811 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481451 4811 flags.go:64] FLAG: --image-service-endpoint="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481460 4811 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481469 4811 flags.go:64] FLAG: --kube-api-burst="100" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481478 4811 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 16 20:56:22 crc kubenswrapper[4811]: 
I0216 20:56:22.481488 4811 flags.go:64] FLAG: --kube-api-qps="50" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481497 4811 flags.go:64] FLAG: --kube-reserved="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481506 4811 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481514 4811 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481523 4811 flags.go:64] FLAG: --kubelet-cgroups="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481533 4811 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481542 4811 flags.go:64] FLAG: --lock-file="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481551 4811 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481560 4811 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481569 4811 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481582 4811 flags.go:64] FLAG: --log-json-split-stream="false" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481597 4811 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481606 4811 flags.go:64] FLAG: --log-text-split-stream="false" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481615 4811 flags.go:64] FLAG: --logging-format="text" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481624 4811 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481634 4811 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481642 4811 flags.go:64] FLAG: --manifest-url="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 
20:56:22.481651 4811 flags.go:64] FLAG: --manifest-url-header="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481662 4811 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481671 4811 flags.go:64] FLAG: --max-open-files="1000000" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481682 4811 flags.go:64] FLAG: --max-pods="110" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481691 4811 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481733 4811 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481742 4811 flags.go:64] FLAG: --memory-manager-policy="None" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481751 4811 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481760 4811 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481770 4811 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481780 4811 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481803 4811 flags.go:64] FLAG: --node-status-max-images="50" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481812 4811 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481821 4811 flags.go:64] FLAG: --oom-score-adj="-999" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481831 4811 flags.go:64] FLAG: --pod-cidr="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481839 4811 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481852 4811 flags.go:64] FLAG: --pod-manifest-path="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481861 4811 flags.go:64] FLAG: --pod-max-pids="-1" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481870 4811 flags.go:64] FLAG: --pods-per-core="0" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481879 4811 flags.go:64] FLAG: --port="10250" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481889 4811 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481897 4811 flags.go:64] FLAG: --provider-id="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481906 4811 flags.go:64] FLAG: --qos-reserved="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481915 4811 flags.go:64] FLAG: --read-only-port="10255" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481925 4811 flags.go:64] FLAG: --register-node="true" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481934 4811 flags.go:64] FLAG: --register-schedulable="true" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481946 4811 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481966 4811 flags.go:64] FLAG: --registry-burst="10" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481975 4811 flags.go:64] FLAG: --registry-qps="5" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481984 4811 flags.go:64] FLAG: --reserved-cpus="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.481992 4811 flags.go:64] FLAG: --reserved-memory="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482002 4811 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 
20:56:22.482011 4811 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482020 4811 flags.go:64] FLAG: --rotate-certificates="false" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482030 4811 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482038 4811 flags.go:64] FLAG: --runonce="false" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482047 4811 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482057 4811 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482072 4811 flags.go:64] FLAG: --seccomp-default="false" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482081 4811 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482090 4811 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482100 4811 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482109 4811 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482118 4811 flags.go:64] FLAG: --storage-driver-password="root" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482130 4811 flags.go:64] FLAG: --storage-driver-secure="false" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482139 4811 flags.go:64] FLAG: --storage-driver-table="stats" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482148 4811 flags.go:64] FLAG: --storage-driver-user="root" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482156 4811 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482165 4811 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 16 
20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482175 4811 flags.go:64] FLAG: --system-cgroups="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482184 4811 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482221 4811 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482230 4811 flags.go:64] FLAG: --tls-cert-file="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482239 4811 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482251 4811 flags.go:64] FLAG: --tls-min-version="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482260 4811 flags.go:64] FLAG: --tls-private-key-file="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482269 4811 flags.go:64] FLAG: --topology-manager-policy="none" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482277 4811 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482289 4811 flags.go:64] FLAG: --topology-manager-scope="container" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482298 4811 flags.go:64] FLAG: --v="2" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482311 4811 flags.go:64] FLAG: --version="false" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482322 4811 flags.go:64] FLAG: --vmodule="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482332 4811 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.482342 4811 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482548 4811 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482560 4811 feature_gate.go:330] unrecognized feature 
gate: SigstoreImageVerification Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482569 4811 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482579 4811 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482587 4811 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482595 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482603 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482611 4811 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482619 4811 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482626 4811 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482634 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482642 4811 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482653 4811 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482661 4811 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482669 4811 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482676 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 20:56:22 crc kubenswrapper[4811]: 
W0216 20:56:22.482684 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482692 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482700 4811 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482707 4811 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482716 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482723 4811 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482731 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482739 4811 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482747 4811 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482754 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482763 4811 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482770 4811 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482778 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482786 4811 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482793 4811 feature_gate.go:330] unrecognized feature gate: 
ClusterAPIInstall Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482801 4811 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482809 4811 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482818 4811 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482825 4811 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482833 4811 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482841 4811 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482848 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482857 4811 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482865 4811 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482873 4811 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482881 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482889 4811 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482897 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482910 4811 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 
20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482918 4811 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482925 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482933 4811 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482940 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482949 4811 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482959 4811 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482969 4811 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482978 4811 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482986 4811 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.482994 4811 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.483002 4811 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.483010 4811 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.483020 4811 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.483029 4811 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.483040 4811 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.483049 4811 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.483057 4811 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.483065 4811 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.483073 4811 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.483084 4811 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.483093 4811 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.483101 4811 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.483109 4811 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.483125 4811 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.483133 4811 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.483141 4811 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.484186 4811 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.495410 4811 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.495486 4811 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495634 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495663 4811 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495680 4811 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495696 4811 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495707 4811 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495718 4811 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495729 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495739 4811 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495748 4811 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495758 4811 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495769 4811 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495780 4811 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495789 4811 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495797 4811 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495805 4811 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495815 4811 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495824 4811 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495832 4811 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495840 4811 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495847 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495855 4811 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495864 4811 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495871 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495879 4811 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495888 4811 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495896 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495904 4811 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495912 4811 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495920 4811 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495927 4811 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495935 4811 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495946 4811 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495955 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495964 4811 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495972 4811 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495981 4811 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495989 4811 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.495998 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496007 4811 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496016 4811 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496024 4811 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496032 4811 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496040 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496048 4811 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496056 4811 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496064 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496071 4811 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496080 4811 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496087 4811 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496095 4811 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496103 4811 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496110 4811 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496118 4811 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496126 4811 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496134 4811 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496141 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496149 4811 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496156 4811 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496164 4811 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496172 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496183 4811 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496192 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496227 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496235 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496243 4811 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496252 4811 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496260 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496267 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496278 4811 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496287 4811 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496296 4811 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.496309 4811 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496542 4811 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496557 4811 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496566 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496575 4811 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496582 4811 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496590 4811 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496599 4811 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496607 4811 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496615 4811 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496622 4811 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496630 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496642 4811 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496652 4811 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496662 4811 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496671 4811 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496679 4811 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496688 4811 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496696 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496704 4811 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496712 4811 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496720 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496728 4811 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496736 4811 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496745 4811 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496753 4811 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496761 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496769 4811 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496780 4811 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496790 4811 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496799 4811 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496807 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496815 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496824 4811 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496833 4811 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496841 4811 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496849 4811 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496857 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496868 4811 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496876 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496884 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496893 4811 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496902 4811 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496910 4811 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496918 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496925 4811 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496934 4811 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496944 4811 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496956 4811 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496967 4811 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496978 4811 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.496989 4811 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.497002 4811 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.497013 4811 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.497023 4811 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.497033 4811 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.497043 4811 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.497052 4811 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.497063 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.497074 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.497083 4811 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.497093 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.497102 4811 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.497113 4811 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.497123 4811 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.497133 4811 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.497143 4811 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.497152 4811 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.497163 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.497175 4811 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.497185 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.497217 4811 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.497231 4811 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.497495 4811 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.503526 4811 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.503675 4811 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.505808 4811 server.go:997] "Starting client certificate rotation"
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.505865 4811 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.506884 4811 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-04 10:09:36.431206144 +0000 UTC
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.506969 4811 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.531469 4811 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.535005 4811 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 20:56:22 crc kubenswrapper[4811]: E0216 20:56:22.536935 4811 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.549505 4811 log.go:25] "Validated CRI v1 runtime API"
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.583342 4811 log.go:25] "Validated CRI v1 image API"
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.585344 4811 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.591686 4811 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-16-20-52-02-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.591735 4811 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.619346 4811 manager.go:217] Machine: {Timestamp:2026-02-16 20:56:22.616543382 +0000 UTC m=+0.545839350 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654112256 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:529dfd1c-acac-4f44-8431-0dae7052f19c BootID:87f61b05-d276-4909-a6aa-85b13eb068a7 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827056128 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827056128 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108168 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:8f:ed:b7 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:8f:ed:b7 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:f4:bc:60 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:56:e8:fe Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:9f:42:e9 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:7a:8f:a8 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:da:89:6f:6a:72:f4 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:8a:94:27:84:94:c8 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654112256 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.619596 4811 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.619838 4811 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.620133 4811 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.620314 4811 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.620362 4811 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.620585 4811 topology_manager.go:138] "Creating topology manager with none policy"
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.620597 4811 container_manager_linux.go:303] "Creating device plugin manager"
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.621101 4811 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.621141 4811 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.622014 4811 state_mem.go:36] "Initialized new in-memory state store"
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.622470 4811 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.625529 4811 kubelet.go:418] "Attempting to sync node with API server"
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.625555 4811 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.625587 4811 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.625602 4811 kubelet.go:324] "Adding apiserver pod source"
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.625613 4811 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.629370 4811 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.630549 4811 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.632072 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.632119 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Feb 16 20:56:22 crc kubenswrapper[4811]: E0216 20:56:22.632285 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:56:22 crc kubenswrapper[4811]: E0216 20:56:22.632318 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.633146 4811 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are 
in static kubelet mode" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.635009 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.635033 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.635041 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.635048 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.635060 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.635067 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.635074 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.635084 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.635093 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.635103 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.635114 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.635124 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.635987 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.636456 4811 server.go:1280] "Started kubelet" Feb 16 20:56:22 
crc kubenswrapper[4811]: I0216 20:56:22.637590 4811 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.637928 4811 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.637929 4811 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 16 20:56:22 crc systemd[1]: Started Kubernetes Kubelet. Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.638548 4811 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.639697 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.639768 4811 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.639848 4811 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.639868 4811 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.639847 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 12:11:58.362066934 +0000 UTC Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.639953 4811 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 16 20:56:22 crc kubenswrapper[4811]: E0216 20:56:22.640028 4811 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 
20:56:22.641240 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Feb 16 20:56:22 crc kubenswrapper[4811]: E0216 20:56:22.641375 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.641662 4811 factory.go:55] Registering systemd factory Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.641737 4811 factory.go:221] Registration of the systemd container factory successfully Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.642392 4811 server.go:460] "Adding debug handlers to kubelet server" Feb 16 20:56:22 crc kubenswrapper[4811]: E0216 20:56:22.646287 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="200ms" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.646711 4811 factory.go:153] Registering CRI-O factory Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.647529 4811 factory.go:221] Registration of the crio container factory successfully Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.647695 4811 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.647761 4811 
factory.go:103] Registering Raw factory Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.647802 4811 manager.go:1196] Started watching for new ooms in manager Feb 16 20:56:22 crc kubenswrapper[4811]: E0216 20:56:22.648090 4811 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.9:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894d5936d703868 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 20:56:22.636427368 +0000 UTC m=+0.565723306,LastTimestamp:2026-02-16 20:56:22.636427368 +0000 UTC m=+0.565723306,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.650338 4811 manager.go:319] Starting recovery of all containers Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.656614 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.656691 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.656715 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.656736 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.656757 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.656778 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.656796 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.656816 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.656838 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.656856 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.656878 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.656900 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.656918 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.656944 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.656963 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.656987 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.657017 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.657042 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.657069 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.657164 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.657190 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.657276 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.657303 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.657350 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.657379 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.657406 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.657439 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.657467 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.657496 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.657525 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.657553 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.657672 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.657708 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.657737 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.657766 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.657798 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.657827 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.657854 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.657883 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" 
seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.657912 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.657938 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.657965 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658066 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658093 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658121 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658152 4811 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658181 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658246 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658276 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658303 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658328 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658355 4811 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658432 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658466 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658507 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658536 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658564 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658592 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658618 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658642 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658669 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658698 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658721 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658749 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658774 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658797 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658822 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658845 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658875 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658898 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658925 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658949 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658972 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.658998 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659025 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659049 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659076 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659104 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659132 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659160 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659187 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659250 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 16 
20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659276 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659303 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659328 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659357 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659385 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659411 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659436 4811 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659462 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659488 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659518 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659545 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659570 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659594 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659622 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659646 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659669 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659695 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659720 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659742 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659767 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659793 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659819 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659860 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659892 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659921 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659949 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.659987 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660016 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660044 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660190 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660254 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660315 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660342 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660367 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660390 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660415 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660437 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660461 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660485 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660511 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660538 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660562 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660586 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660609 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660634 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660664 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660702 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660732 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660757 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660783 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660810 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660836 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660863 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660888 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660913 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" 
seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660938 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660962 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.660991 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661019 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661044 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661070 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: 
I0216 20:56:22.661095 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661114 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661133 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661152 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661170 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661189 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661255 4811 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661282 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661306 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661333 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661459 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661489 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661516 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661543 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661570 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661594 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661656 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661684 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661711 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661737 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661762 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661786 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661812 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661838 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661866 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" 
Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661892 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661917 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661946 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661972 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.661996 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.662020 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: 
I0216 20:56:22.664178 4811 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664283 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664319 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664346 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664372 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664397 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" 
seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664423 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664453 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664479 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664504 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664532 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664562 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664586 4811 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664613 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664638 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664666 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664691 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664720 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664746 4811 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664770 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664819 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664842 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664866 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664895 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664919 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664944 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664971 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.664998 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.665023 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.665049 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.665073 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" 
seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.665098 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.665122 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.665150 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.665172 4811 reconstruct.go:97] "Volume reconstruction finished" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.665189 4811 reconciler.go:26] "Reconciler: start to sync state" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.670107 4811 manager.go:324] Recovery completed Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.678086 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.682374 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.683833 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.683869 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:22 
crc kubenswrapper[4811]: I0216 20:56:22.686812 4811 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.686838 4811 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.686911 4811 state_mem.go:36] "Initialized new in-memory state store" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.698176 4811 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.701495 4811 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.701558 4811 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.701627 4811 kubelet.go:2335] "Starting kubelet main sync loop" Feb 16 20:56:22 crc kubenswrapper[4811]: E0216 20:56:22.701706 4811 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 16 20:56:22 crc kubenswrapper[4811]: W0216 20:56:22.703700 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Feb 16 20:56:22 crc kubenswrapper[4811]: E0216 20:56:22.703766 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.716454 4811 policy_none.go:49] "None policy: Start" Feb 16 20:56:22 crc 
kubenswrapper[4811]: I0216 20:56:22.718821 4811 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.718865 4811 state_mem.go:35] "Initializing new in-memory state store" Feb 16 20:56:22 crc kubenswrapper[4811]: E0216 20:56:22.740649 4811 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.784564 4811 manager.go:334] "Starting Device Plugin manager" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.784621 4811 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.784637 4811 server.go:79] "Starting device plugin registration server" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.785099 4811 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.785118 4811 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.785322 4811 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.785411 4811 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.785421 4811 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 16 20:56:22 crc kubenswrapper[4811]: E0216 20:56:22.793314 4811 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.802557 4811 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.802646 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.804046 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.804090 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.804101 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.804280 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.804622 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.804678 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.805086 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.805115 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.805125 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.805251 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.805397 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.805426 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.805723 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.805762 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.805777 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.806395 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.806408 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.806429 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.806440 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.806413 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.806617 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.806636 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:22 crc 
kubenswrapper[4811]: I0216 20:56:22.806770 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.806804 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.807213 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.807255 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.807301 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.807495 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.807590 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.807616 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.807652 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.807671 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.807680 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.808723 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.808745 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.808753 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.808794 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.808819 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.808836 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.808860 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.808880 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.810935 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.810954 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.810963 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:22 crc kubenswrapper[4811]: E0216 20:56:22.847387 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="400ms" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.867982 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.868017 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.868038 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.868058 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.868072 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.868099 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.868142 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.868160 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.868175 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.868214 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.868232 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.868246 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.868259 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.868287 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.868302 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.886224 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.887057 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.887114 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.887137 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.887176 4811 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 20:56:22 crc kubenswrapper[4811]: E0216 20:56:22.887786 4811 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.9:6443: connect: connection refused" node="crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969100 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969240 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969115 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969277 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969303 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969320 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969333 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969339 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969381 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969390 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969382 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969407 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969440 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969454 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969501 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969516 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969543 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969548 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969543 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969577 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969589 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969631 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969636 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969685 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969723 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969724 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969764 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: 
I0216 20:56:22.969790 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969809 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 20:56:22 crc kubenswrapper[4811]: I0216 20:56:22.969887 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 20:56:23 crc kubenswrapper[4811]: I0216 20:56:23.088112 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:23 crc kubenswrapper[4811]: I0216 20:56:23.089702 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:23 crc kubenswrapper[4811]: I0216 20:56:23.089778 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:23 crc kubenswrapper[4811]: I0216 20:56:23.089795 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:23 crc kubenswrapper[4811]: I0216 20:56:23.089838 4811 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 20:56:23 crc kubenswrapper[4811]: E0216 20:56:23.090600 4811 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.9:6443: connect: connection refused" node="crc" Feb 16 20:56:23 crc kubenswrapper[4811]: I0216 20:56:23.161547 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 20:56:23 crc kubenswrapper[4811]: I0216 20:56:23.177327 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 20:56:23 crc kubenswrapper[4811]: I0216 20:56:23.195785 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 20:56:23 crc kubenswrapper[4811]: W0216 20:56:23.216309 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-b8db9113ec5fa085245e80f09cc05e5657ab16f4bb11968973a7e9eff5fdca2f WatchSource:0}: Error finding container b8db9113ec5fa085245e80f09cc05e5657ab16f4bb11968973a7e9eff5fdca2f: Status 404 returned error can't find the container with id b8db9113ec5fa085245e80f09cc05e5657ab16f4bb11968973a7e9eff5fdca2f Feb 16 20:56:23 crc kubenswrapper[4811]: I0216 20:56:23.216549 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:23 crc kubenswrapper[4811]: W0216 20:56:23.219941 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-af2a149cc9123942b608a4d2534985913cce95f2a6fc29c2ff0f400696a8eed2 WatchSource:0}: Error finding container af2a149cc9123942b608a4d2534985913cce95f2a6fc29c2ff0f400696a8eed2: Status 404 returned error can't find the container with id af2a149cc9123942b608a4d2534985913cce95f2a6fc29c2ff0f400696a8eed2 Feb 16 20:56:23 crc kubenswrapper[4811]: I0216 20:56:23.225405 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:23 crc kubenswrapper[4811]: W0216 20:56:23.234232 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-7cdbf575ec4347c11f21e6c7780ec29c4eed7537b6b6b3660792e8338bdb2394 WatchSource:0}: Error finding container 7cdbf575ec4347c11f21e6c7780ec29c4eed7537b6b6b3660792e8338bdb2394: Status 404 returned error can't find the container with id 7cdbf575ec4347c11f21e6c7780ec29c4eed7537b6b6b3660792e8338bdb2394 Feb 16 20:56:23 crc kubenswrapper[4811]: E0216 20:56:23.248918 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="800ms" Feb 16 20:56:23 crc kubenswrapper[4811]: W0216 20:56:23.482465 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection 
refused Feb 16 20:56:23 crc kubenswrapper[4811]: E0216 20:56:23.482597 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:56:23 crc kubenswrapper[4811]: I0216 20:56:23.490948 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:23 crc kubenswrapper[4811]: I0216 20:56:23.493266 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:23 crc kubenswrapper[4811]: I0216 20:56:23.493321 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:23 crc kubenswrapper[4811]: I0216 20:56:23.493336 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:23 crc kubenswrapper[4811]: I0216 20:56:23.493379 4811 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 20:56:23 crc kubenswrapper[4811]: E0216 20:56:23.494054 4811 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.9:6443: connect: connection refused" node="crc" Feb 16 20:56:23 crc kubenswrapper[4811]: I0216 20:56:23.638563 4811 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Feb 16 20:56:23 crc kubenswrapper[4811]: I0216 20:56:23.640571 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2026-01-05 23:27:28.313223136 +0000 UTC Feb 16 20:56:23 crc kubenswrapper[4811]: I0216 20:56:23.712401 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7cdbf575ec4347c11f21e6c7780ec29c4eed7537b6b6b3660792e8338bdb2394"} Feb 16 20:56:23 crc kubenswrapper[4811]: I0216 20:56:23.713470 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"af2a149cc9123942b608a4d2534985913cce95f2a6fc29c2ff0f400696a8eed2"} Feb 16 20:56:23 crc kubenswrapper[4811]: I0216 20:56:23.714654 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"b8db9113ec5fa085245e80f09cc05e5657ab16f4bb11968973a7e9eff5fdca2f"} Feb 16 20:56:23 crc kubenswrapper[4811]: I0216 20:56:23.715876 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4486194a3c4c40bc6c99d4f38ead5749ba711a680bd2ab05b1e5b9e0cba50b0f"} Feb 16 20:56:23 crc kubenswrapper[4811]: I0216 20:56:23.716958 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7da4a31395e305caa6609b7f3a9d9778e8b787e02d9be9753f9d47cd96f4cf8c"} Feb 16 20:56:23 crc kubenswrapper[4811]: W0216 20:56:23.822049 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Feb 16 20:56:23 crc kubenswrapper[4811]: 
E0216 20:56:23.822126 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:56:23 crc kubenswrapper[4811]: W0216 20:56:23.837234 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Feb 16 20:56:23 crc kubenswrapper[4811]: E0216 20:56:23.837320 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:56:24 crc kubenswrapper[4811]: E0216 20:56:24.050016 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="1.6s" Feb 16 20:56:24 crc kubenswrapper[4811]: W0216 20:56:24.248083 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Feb 16 20:56:24 crc kubenswrapper[4811]: E0216 20:56:24.248185 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.294920 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.296294 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.296328 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.296336 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.296358 4811 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 20:56:24 crc kubenswrapper[4811]: E0216 20:56:24.296879 4811 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.9:6443: connect: connection refused" node="crc" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.638411 4811 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.640682 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 09:20:16.846176106 +0000 UTC Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.668740 4811 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 20:56:24 crc 
kubenswrapper[4811]: E0216 20:56:24.670018 4811 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.720734 4811 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9" exitCode=0 Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.720781 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9"} Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.720834 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.721723 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.721765 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.721779 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.722629 4811 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="29c0f57d191bf3e315467166fa2ad14c9add128291cc79cdd05c0c2f40c9f167" exitCode=0 Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.722683 4811 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"29c0f57d191bf3e315467166fa2ad14c9add128291cc79cdd05c0c2f40c9f167"} Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.722685 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.723309 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.723334 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.723348 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.725079 4811 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d" exitCode=0 Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.725147 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.725146 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d"} Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.725992 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.726017 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.726030 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.727865 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f"} Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.727899 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2"} Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.727915 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb"} Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.727929 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc"} Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.727876 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.728497 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.728528 4811 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.728541 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.729136 4811 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e" exitCode=0 Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.729170 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e"} Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.729354 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.730406 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.730436 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.730451 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.732187 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.734432 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:24 crc kubenswrapper[4811]: I0216 20:56:24.734484 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:24 crc 
kubenswrapper[4811]: I0216 20:56:24.734500 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.299962 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.638973 4811 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.640976 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 09:31:35.828841695 +0000 UTC Feb 16 20:56:25 crc kubenswrapper[4811]: E0216 20:56:25.650435 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="3.2s" Feb 16 20:56:25 crc kubenswrapper[4811]: W0216 20:56:25.716304 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Feb 16 20:56:25 crc kubenswrapper[4811]: E0216 20:56:25.716396 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 
20:56:25.737988 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49"} Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.738033 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858"} Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.738045 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be"} Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.738056 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30"} Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.742586 4811 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637" exitCode=0 Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.742655 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637"} Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.742756 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.746057 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.746116 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.746137 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.746989 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.746994 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"b94a3750a50b4ec77d812e54702f5419af37a45dc21a30eaf918dbe789da0651"} Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.748048 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.748091 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.748101 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.750473 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"44a1a2735a81b0d6b9261f675ec2907fa8ef100dba30e3a1bc9f906236eb376c"} Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.750499 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9c6ab160c0ebbd5402cb42a47636289d18fa0b45751a6a1efe080086f58f11a1"} Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.750512 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"25d4bb653feada8d43c9d5c591dc6b998b5832bd3f22e2ec37e5699eccf969d2"} Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.750563 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.750622 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.751471 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.751514 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.751530 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.751715 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.751764 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.751781 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:25 crc kubenswrapper[4811]: W0216 20:56:25.816746 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Feb 16 20:56:25 crc kubenswrapper[4811]: E0216 20:56:25.816841 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:56:25 crc kubenswrapper[4811]: W0216 20:56:25.867950 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Feb 16 20:56:25 crc kubenswrapper[4811]: E0216 20:56:25.868060 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.897814 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.898932 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.898978 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.898994 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 
20:56:25 crc kubenswrapper[4811]: I0216 20:56:25.899019 4811 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 20:56:25 crc kubenswrapper[4811]: E0216 20:56:25.899532 4811 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.9:6443: connect: connection refused" node="crc" Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.150717 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.158191 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.641660 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 05:26:23.605300019 +0000 UTC Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.756068 4811 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe" exitCode=0 Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.756284 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe"} Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.756342 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.757639 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.757684 
4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.757700 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.761626 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80"} Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.761673 4811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.761731 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.761767 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.761733 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.761775 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.763106 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.763137 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.763148 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.763578 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.763635 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.763665 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.763740 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.763780 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.763797 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.763744 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.763906 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:26 crc kubenswrapper[4811]: I0216 20:56:26.763916 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:27 crc kubenswrapper[4811]: I0216 20:56:27.240476 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:27 crc kubenswrapper[4811]: I0216 20:56:27.641992 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 22:40:58.5229643 +0000 UTC Feb 16 20:56:27 crc kubenswrapper[4811]: I0216 20:56:27.735345 
4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:27 crc kubenswrapper[4811]: I0216 20:56:27.769715 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95"} Feb 16 20:56:27 crc kubenswrapper[4811]: I0216 20:56:27.769805 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964"} Feb 16 20:56:27 crc kubenswrapper[4811]: I0216 20:56:27.769832 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb"} Feb 16 20:56:27 crc kubenswrapper[4811]: I0216 20:56:27.769851 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf"} Feb 16 20:56:27 crc kubenswrapper[4811]: I0216 20:56:27.769861 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:27 crc kubenswrapper[4811]: I0216 20:56:27.769981 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:27 crc kubenswrapper[4811]: I0216 20:56:27.770955 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:27 crc kubenswrapper[4811]: I0216 20:56:27.770992 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:27 crc 
kubenswrapper[4811]: I0216 20:56:27.771006 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:27 crc kubenswrapper[4811]: I0216 20:56:27.771638 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:27 crc kubenswrapper[4811]: I0216 20:56:27.771699 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:27 crc kubenswrapper[4811]: I0216 20:56:27.771721 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:28 crc kubenswrapper[4811]: I0216 20:56:28.642305 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 18:48:35.365547256 +0000 UTC Feb 16 20:56:28 crc kubenswrapper[4811]: I0216 20:56:28.778762 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e"} Feb 16 20:56:28 crc kubenswrapper[4811]: I0216 20:56:28.778862 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:28 crc kubenswrapper[4811]: I0216 20:56:28.778913 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:28 crc kubenswrapper[4811]: I0216 20:56:28.779200 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:28 crc kubenswrapper[4811]: I0216 20:56:28.780513 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:28 crc kubenswrapper[4811]: I0216 20:56:28.780583 4811 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:28 crc kubenswrapper[4811]: I0216 20:56:28.780602 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:28 crc kubenswrapper[4811]: I0216 20:56:28.780616 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:28 crc kubenswrapper[4811]: I0216 20:56:28.780669 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:28 crc kubenswrapper[4811]: I0216 20:56:28.780689 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:28 crc kubenswrapper[4811]: I0216 20:56:28.782022 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:28 crc kubenswrapper[4811]: I0216 20:56:28.782048 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:28 crc kubenswrapper[4811]: I0216 20:56:28.782060 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:28 crc kubenswrapper[4811]: I0216 20:56:28.788633 4811 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 20:56:29 crc kubenswrapper[4811]: I0216 20:56:29.099666 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:29 crc kubenswrapper[4811]: I0216 20:56:29.101116 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:29 crc kubenswrapper[4811]: I0216 20:56:29.101150 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:29 crc kubenswrapper[4811]: I0216 20:56:29.101159 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:29 crc kubenswrapper[4811]: I0216 20:56:29.101179 4811 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 20:56:29 crc kubenswrapper[4811]: I0216 20:56:29.643454 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 10:43:19.82479813 +0000 UTC Feb 16 20:56:29 crc kubenswrapper[4811]: I0216 20:56:29.781542 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:29 crc kubenswrapper[4811]: I0216 20:56:29.782974 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:29 crc kubenswrapper[4811]: I0216 20:56:29.783051 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:29 crc kubenswrapper[4811]: I0216 20:56:29.783086 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:30 crc kubenswrapper[4811]: I0216 20:56:30.048730 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:30 crc kubenswrapper[4811]: I0216 20:56:30.048945 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:30 crc kubenswrapper[4811]: I0216 20:56:30.050598 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:30 crc kubenswrapper[4811]: I0216 20:56:30.050896 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:30 crc kubenswrapper[4811]: I0216 20:56:30.051034 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 16 20:56:30 crc kubenswrapper[4811]: I0216 20:56:30.241356 4811 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 20:56:30 crc kubenswrapper[4811]: I0216 20:56:30.241521 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 20:56:30 crc kubenswrapper[4811]: I0216 20:56:30.561972 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 20:56:30 crc kubenswrapper[4811]: I0216 20:56:30.562354 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:30 crc kubenswrapper[4811]: I0216 20:56:30.564259 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:30 crc kubenswrapper[4811]: I0216 20:56:30.564318 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:30 crc kubenswrapper[4811]: I0216 20:56:30.564356 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:30 crc kubenswrapper[4811]: I0216 20:56:30.643587 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 23:16:21.442440989 +0000 UTC Feb 16 20:56:31 crc kubenswrapper[4811]: 
I0216 20:56:31.597333 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 16 20:56:31 crc kubenswrapper[4811]: I0216 20:56:31.597584 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:31 crc kubenswrapper[4811]: I0216 20:56:31.599074 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:31 crc kubenswrapper[4811]: I0216 20:56:31.599404 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:31 crc kubenswrapper[4811]: I0216 20:56:31.599465 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:31 crc kubenswrapper[4811]: I0216 20:56:31.611270 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:31 crc kubenswrapper[4811]: I0216 20:56:31.611494 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:31 crc kubenswrapper[4811]: I0216 20:56:31.612982 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:31 crc kubenswrapper[4811]: I0216 20:56:31.613084 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:31 crc kubenswrapper[4811]: I0216 20:56:31.613107 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:31 crc kubenswrapper[4811]: I0216 20:56:31.644435 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 01:09:15.234583343 +0000 UTC Feb 16 20:56:32 crc kubenswrapper[4811]: I0216 20:56:32.644815 4811 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 11:15:51.532201381 +0000 UTC Feb 16 20:56:32 crc kubenswrapper[4811]: E0216 20:56:32.793630 4811 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 20:56:33 crc kubenswrapper[4811]: I0216 20:56:33.625249 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:33 crc kubenswrapper[4811]: I0216 20:56:33.625567 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:33 crc kubenswrapper[4811]: I0216 20:56:33.628055 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:33 crc kubenswrapper[4811]: I0216 20:56:33.628142 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:33 crc kubenswrapper[4811]: I0216 20:56:33.628158 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:33 crc kubenswrapper[4811]: I0216 20:56:33.630111 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:33 crc kubenswrapper[4811]: I0216 20:56:33.645444 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 20:30:03.527404981 +0000 UTC Feb 16 20:56:33 crc kubenswrapper[4811]: I0216 20:56:33.794674 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:33 crc kubenswrapper[4811]: I0216 20:56:33.796078 4811 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:33 crc kubenswrapper[4811]: I0216 20:56:33.796154 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:33 crc kubenswrapper[4811]: I0216 20:56:33.796176 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:33 crc kubenswrapper[4811]: I0216 20:56:33.803971 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 16 20:56:33 crc kubenswrapper[4811]: I0216 20:56:33.804247 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:33 crc kubenswrapper[4811]: I0216 20:56:33.805523 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:33 crc kubenswrapper[4811]: I0216 20:56:33.805583 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:33 crc kubenswrapper[4811]: I0216 20:56:33.805603 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:34 crc kubenswrapper[4811]: I0216 20:56:34.646334 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 05:48:51.266235195 +0000 UTC Feb 16 20:56:35 crc kubenswrapper[4811]: I0216 20:56:35.646563 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 14:19:23.825227157 +0000 UTC Feb 16 20:56:36 crc kubenswrapper[4811]: W0216 20:56:36.219118 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": 
net/http: TLS handshake timeout Feb 16 20:56:36 crc kubenswrapper[4811]: I0216 20:56:36.219370 4811 trace.go:236] Trace[2024992542]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 20:56:26.216) (total time: 10002ms): Feb 16 20:56:36 crc kubenswrapper[4811]: Trace[2024992542]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (20:56:36.219) Feb 16 20:56:36 crc kubenswrapper[4811]: Trace[2024992542]: [10.002404838s] [10.002404838s] END Feb 16 20:56:36 crc kubenswrapper[4811]: E0216 20:56:36.219451 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 16 20:56:36 crc kubenswrapper[4811]: I0216 20:56:36.646913 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 02:44:58.063813746 +0000 UTC Feb 16 20:56:36 crc kubenswrapper[4811]: I0216 20:56:36.649063 4811 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 20:56:36 crc kubenswrapper[4811]: I0216 20:56:36.649124 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 
403" Feb 16 20:56:36 crc kubenswrapper[4811]: I0216 20:56:36.656750 4811 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 20:56:36 crc kubenswrapper[4811]: I0216 20:56:36.656825 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 20:56:36 crc kubenswrapper[4811]: I0216 20:56:36.803605 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 20:56:36 crc kubenswrapper[4811]: I0216 20:56:36.805725 4811 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80" exitCode=255 Feb 16 20:56:36 crc kubenswrapper[4811]: I0216 20:56:36.805770 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80"} Feb 16 20:56:36 crc kubenswrapper[4811]: I0216 20:56:36.805916 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:36 crc kubenswrapper[4811]: I0216 20:56:36.806750 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:36 crc kubenswrapper[4811]: I0216 20:56:36.806782 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:36 crc kubenswrapper[4811]: I0216 20:56:36.806792 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:36 crc kubenswrapper[4811]: I0216 20:56:36.807336 4811 scope.go:117] "RemoveContainer" containerID="fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80" Feb 16 20:56:37 crc kubenswrapper[4811]: I0216 20:56:37.647751 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 19:35:52.669144038 +0000 UTC Feb 16 20:56:37 crc kubenswrapper[4811]: I0216 20:56:37.809836 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 20:56:37 crc kubenswrapper[4811]: I0216 20:56:37.812066 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98"} Feb 16 20:56:37 crc kubenswrapper[4811]: I0216 20:56:37.812324 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:37 crc kubenswrapper[4811]: I0216 20:56:37.813876 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:37 crc kubenswrapper[4811]: I0216 20:56:37.813919 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:37 crc kubenswrapper[4811]: I0216 20:56:37.813935 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:38 crc kubenswrapper[4811]: I0216 
20:56:38.648533 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 03:15:05.337502045 +0000 UTC Feb 16 20:56:39 crc kubenswrapper[4811]: I0216 20:56:39.650115 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 16:46:08.508755689 +0000 UTC Feb 16 20:56:39 crc kubenswrapper[4811]: I0216 20:56:39.856658 4811 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.241990 4811 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.242081 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.636747 4811 apiserver.go:52] "Watching apiserver" Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.651002 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 22:02:16.874082574 +0000 UTC Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.666403 4811 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 20:56:40 
crc kubenswrapper[4811]: I0216 20:56:40.666945 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.667556 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.667727 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.667844 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.667857 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 20:56:40 crc kubenswrapper[4811]: E0216 20:56:40.668116 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:56:40 crc kubenswrapper[4811]: E0216 20:56:40.668123 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.668786 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.668843 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 20:56:40 crc kubenswrapper[4811]: E0216 20:56:40.668857 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.670577 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.671316 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.672636 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.672930 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.673082 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.673110 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.672928 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.673004 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.673530 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.722120 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.735586 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.741588 4811 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.745390 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.759259 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.774545 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.788194 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:40 crc kubenswrapper[4811]: I0216 20:56:40.796968 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.619044 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.619661 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.625035 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.633241 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.637108 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 20:56:41 crc kubenswrapper[4811]: E0216 20:56:41.642317 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.643258 4811 trace.go:236] Trace[61231459]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 20:56:29.955) (total time: 11687ms): Feb 16 20:56:41 crc kubenswrapper[4811]: Trace[61231459]: ---"Objects listed" error: 11687ms (20:56:41.643) Feb 16 20:56:41 crc kubenswrapper[4811]: Trace[61231459]: [11.687887363s] [11.687887363s] END Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.643291 4811 reflector.go:368] Caches populated for 
*v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 20:56:41 crc kubenswrapper[4811]: E0216 20:56:41.643941 4811 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.644103 4811 trace.go:236] Trace[252946500]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 20:56:31.073) (total time: 10570ms): Feb 16 20:56:41 crc kubenswrapper[4811]: Trace[252946500]: ---"Objects listed" error: 10570ms (20:56:41.644) Feb 16 20:56:41 crc kubenswrapper[4811]: Trace[252946500]: [10.570447802s] [10.570447802s] END Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.644146 4811 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.645229 4811 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.646145 4811 trace.go:236] Trace[1182571534]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 20:56:30.507) (total time: 11138ms): Feb 16 20:56:41 crc kubenswrapper[4811]: Trace[1182571534]: ---"Objects listed" error: 11138ms (20:56:41.645) Feb 16 20:56:41 crc kubenswrapper[4811]: Trace[1182571534]: [11.138338597s] [11.138338597s] END Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.646173 4811 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.648349 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.651263 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:09:00.901290616 +0000 UTC Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.659818 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.673085 4811 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.675461 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.688976 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.700225 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.700792 4811 csr.go:261] certificate signing request csr-nzjvt is approved, waiting to be issued Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.701922 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:41 crc kubenswrapper[4811]: E0216 20:56:41.702126 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.714914 4811 csr.go:257] certificate signing request csr-nzjvt is issued Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.720409 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.741773 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.746404 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.746452 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.746475 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 
20:56:41.746501 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.746527 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.746548 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.746637 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.746662 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.746685 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod 
\"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.746706 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.746941 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.746938 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.747099 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.747123 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.747164 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.747258 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.747287 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.747543 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: 
"kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.747644 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.747599 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.747671 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.747718 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.747741 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") 
pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.748760 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.748791 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.748816 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.747842 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.748844 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.747965 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.748976 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.748043 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.748085 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.748097 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.748209 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.748404 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.748468 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.748701 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.748976 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749123 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749033 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749230 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749244 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749282 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749314 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749342 
4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749367 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749394 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749419 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749430 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749471 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749504 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749533 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749537 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749560 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749589 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749618 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749643 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749668 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749695 4811 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749723 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749747 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749777 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749802 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749829 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749905 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749931 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749955 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749980 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750005 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750030 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750053 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750081 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750106 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750131 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750158 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750186 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750236 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750264 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750288 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750313 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750337 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750396 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749539 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749901 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749936 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.749939 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750005 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750473 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750062 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750095 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750155 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750185 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750248 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750309 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750407 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750542 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750632 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750957 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.751129 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.751501 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.751511 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.751790 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.752056 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.753060 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.759309 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.750460 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.760030 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.760184 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.760277 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.760350 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.760387 4811 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.760416 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.760548 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.760590 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.760623 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.760650 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.760673 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.760701 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.760728 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.760750 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.760771 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.760795 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.760810 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.760820 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.760939 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.760989 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 
20:56:41.761067 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.761096 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.761101 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.761171 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.761220 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.761245 4811 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.761269 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.761297 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.761322 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.761390 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.761415 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.761441 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.761464 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.761474 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.761489 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.761516 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.761521 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.761536 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.761672 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.758301 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.761690 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.761797 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.761827 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.761856 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 20:56:41 crc kubenswrapper[4811]: 
I0216 20:56:41.761883 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.761904 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.761925 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.761933 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.761964 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.762005 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.762031 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.762055 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.762080 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.762104 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.762129 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.762154 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.762179 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.762229 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: 
"a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.762237 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.762323 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.762355 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.762470 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.762531 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 
20:56:41.762567 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.762688 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.762725 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.762906 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.762951 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.763001 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.763035 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.763086 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.763140 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.763186 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.763250 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.763306 4811 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.763341 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.763396 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.763438 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.763496 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.763547 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod 
\"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.763581 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.763632 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.763664 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764111 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764179 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764242 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764292 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764333 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764382 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764413 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764419 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764481 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764512 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764543 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764567 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764594 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" 
(UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764619 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764639 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764663 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764685 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764704 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 20:56:41 crc 
kubenswrapper[4811]: I0216 20:56:41.764729 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764751 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764772 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764793 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764813 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764834 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764854 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764875 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764897 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764919 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764937 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 20:56:41 crc kubenswrapper[4811]: 
I0216 20:56:41.764960 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.764984 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.765003 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.765027 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.765051 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.765076 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.765109 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.765140 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.765168 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.765186 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.765224 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.765246 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.765267 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.765478 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.765603 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.765719 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.765751 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.765998 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.766008 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.766057 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.766429 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.766642 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.767135 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.767172 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.768148 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.768316 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.767154 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.768943 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.769086 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.770123 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.770180 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.770236 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.770239 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.770272 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.770377 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.770381 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.770418 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.771538 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.772389 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.771562 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.772443 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.772312 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.772769 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.772806 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.773088 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.771833 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.771854 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.771952 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.771990 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.772267 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.772408 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.773354 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.773438 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.773438 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.773484 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.774149 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:41 crc kubenswrapper[4811]: E0216 20:56:41.775009 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:56:42.274985474 +0000 UTC m=+20.204281412 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.775383 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.775547 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.777277 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.777324 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.777357 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.777352 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.778219 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.778582 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.778621 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.778654 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.778842 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.779232 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.780077 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.780515 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.780600 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.780635 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.780659 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.780687 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.780708 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.780856 4811 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.780863 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.780973 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781016 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781046 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781059 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod 
"43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781082 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781115 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781124 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781147 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781177 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781175 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781230 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781264 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781294 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781325 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781355 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781384 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781465 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781498 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781532 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781559 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781562 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781804 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781842 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781849 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781872 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781905 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781933 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781957 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.781977 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782010 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782014 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782040 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782076 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782194 4811 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782267 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782297 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782483 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782642 4811 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782670 4811 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782695 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782712 4811 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782740 4811 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782769 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782800 4811 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782822 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782858 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782882 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782900 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782914 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782928 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782943 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782957 4811 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782971 4811 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.784320 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.785609 4811 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.786944 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782587 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782640 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.782741 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.783032 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.783124 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.783133 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.783259 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: E0216 20:56:41.783291 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 16 20:56:41 crc kubenswrapper[4811]: E0216 20:56:41.792497 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:42.292470911 +0000 UTC m=+20.221766849 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.783375 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.783712 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.783847 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.784038 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.784069 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.784667 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.784692 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.784800 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.784819 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: E0216 20:56:41.784912 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 16 20:56:41 crc kubenswrapper[4811]: E0216 20:56:41.792653 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:42.292645706 +0000 UTC m=+20.221941644 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.784927 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.784992 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.785286 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.785331 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.785639 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.785960 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.786310 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.787317 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.790244 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.790368 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.794094 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.794613 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.784287 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.795742 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.795770 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.795786 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.795799 4811 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.795820 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.795834 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.795849 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.795861 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.795875 4811 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.795898 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.795912 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.795927 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.795944 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.795957 4811 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.795971 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.795989 4811 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.796021 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.796034 4811 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.796051 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.796101 4811 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.796118 4811 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.796133 4811 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.796146 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.796159 4811 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.796172 4811 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.796188 4811 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.796223 4811 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.798722 4811 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.798759 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.798773 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.798786 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.798804 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.798814 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.798829 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.798839 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.798849 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.798862 4811 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.798872 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.798884 4811 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.798894 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.798905 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.798915 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.798929 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.798939 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.798949 4811 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.798960 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.798971 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.798980 4811 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.798990 4811 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799001 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799016 4811 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799026 4811 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799061 4811 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799070 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799082 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799092 4811 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799102 4811 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799112 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799127 4811 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799138 4811 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799154 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799168 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799180 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799190 4811 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799223 4811 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799235 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799248 4811 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799258 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799270 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799280 4811 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799293 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799302 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799312 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799321 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799331 4811 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799340 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: E0216 20:56:41.799339 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 20:56:41 crc kubenswrapper[4811]: E0216 20:56:41.799373 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 20:56:41 crc kubenswrapper[4811]: E0216 20:56:41.799388 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 20:56:41 crc kubenswrapper[4811]: E0216 20:56:41.799454 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:42.299433651 +0000 UTC m=+20.228729589 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799351 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799495 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799507 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799518 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799537 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799548 4811 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799559 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799570 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799582 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799593 4811 reconciler_common.go:293] "Volume detached for volume
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799603 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799614 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799624 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799633 4811 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799643 4811 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799652 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799646 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799662 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799740 4811 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799754 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799776 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799789 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799786 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799802 4811 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.799855 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.803027 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.807990 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.808081 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.808140 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.808628 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.808973 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.809043 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.809306 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.811342 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.812677 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 20:56:41 crc kubenswrapper[4811]: E0216 20:56:41.816534 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:41 crc kubenswrapper[4811]: E0216 20:56:41.816559 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:41 crc kubenswrapper[4811]: E0216 20:56:41.816574 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:41 crc kubenswrapper[4811]: E0216 20:56:41.816630 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:42.316610941 +0000 UTC m=+20.245906879 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.816897 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.818733 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.819228 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.819330 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.819677 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.819718 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.819764 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.820027 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.820102 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.820398 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.820799 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.820889 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.820954 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.821541 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.821600 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.821972 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.822754 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.822903 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.823051 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.823137 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.824924 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.827486 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.827843 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.828653 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.828759 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.829116 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.829117 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.829539 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.829785 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.830098 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.830099 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.831167 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.831987 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.832162 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.832369 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.832507 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.832548 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.833097 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: E0216 20:56:41.834294 4811 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.840904 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.854283 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.857970 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.866406 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.867449 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.893372 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900522 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900609 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900659 4811 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900680 4811 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900691 4811 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900701 4811 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900712 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900722 4811 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900733 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900741 4811 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900750 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" 
Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900760 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900771 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900780 4811 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900789 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900798 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900807 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900815 4811 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" 
DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900825 4811 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900833 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900841 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900850 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900858 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900866 4811 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900875 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900883 4811 
reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900921 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900930 4811 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900939 4811 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900950 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900960 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900970 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900978 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900987 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.900998 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901010 4811 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901022 4811 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901033 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901045 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901055 4811 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc 
kubenswrapper[4811]: I0216 20:56:41.901066 4811 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901076 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901087 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901097 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901107 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901120 4811 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901130 4811 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901141 4811 reconciler_common.go:293] "Volume 
detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901150 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901158 4811 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901167 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901176 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901185 4811 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901194 4811 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901232 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" 
DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901241 4811 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901254 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901262 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901270 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901279 4811 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901288 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901297 4811 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: 
I0216 20:56:41.901306 4811 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901314 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901322 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901331 4811 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901340 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901348 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901357 4811 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901365 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901375 4811 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901384 4811 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901392 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901401 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901409 4811 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901418 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901426 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 
20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901434 4811 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901442 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901489 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 20:56:41 crc kubenswrapper[4811]: I0216 20:56:41.901667 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 20:56:41 crc kubenswrapper[4811]: W0216 20:56:41.906293 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-dbf8e900d37b24a48c1338f0a1ddf6b2bb9dbc99b76e2abea268279737abf79f WatchSource:0}: Error finding container dbf8e900d37b24a48c1338f0a1ddf6b2bb9dbc99b76e2abea268279737abf79f: Status 404 returned error can't find the container with id dbf8e900d37b24a48c1338f0a1ddf6b2bb9dbc99b76e2abea268279737abf79f Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.185813 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 20:56:42 crc kubenswrapper[4811]: W0216 20:56:42.198975 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-0343dd417f9fef1f249a4754b972c5b3b6fd748a7c760072880f6e8602d322ed WatchSource:0}: Error finding container 0343dd417f9fef1f249a4754b972c5b3b6fd748a7c760072880f6e8602d322ed: Status 404 returned error can't find the container with id 0343dd417f9fef1f249a4754b972c5b3b6fd748a7c760072880f6e8602d322ed Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.200322 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 20:56:42 crc kubenswrapper[4811]: W0216 20:56:42.214555 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-9f7905ae12ae15e5ece646ff35484bc8d8dcf3b2e22af06ddb6105a22a4000a9 WatchSource:0}: Error finding container 9f7905ae12ae15e5ece646ff35484bc8d8dcf3b2e22af06ddb6105a22a4000a9: Status 404 returned error can't find the container with id 9f7905ae12ae15e5ece646ff35484bc8d8dcf3b2e22af06ddb6105a22a4000a9 Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.304924 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.305017 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.305056 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.305085 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:42 crc kubenswrapper[4811]: E0216 20:56:42.305137 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:56:42 crc kubenswrapper[4811]: E0216 20:56:42.305181 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:42 crc kubenswrapper[4811]: E0216 20:56:42.305205 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:56:43.305150245 +0000 UTC m=+21.234446183 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:56:42 crc kubenswrapper[4811]: E0216 20:56:42.305215 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:42 crc kubenswrapper[4811]: E0216 20:56:42.305261 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:42 crc kubenswrapper[4811]: E0216 20:56:42.305269 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:56:42 crc kubenswrapper[4811]: E0216 20:56:42.305273 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:43.305253747 +0000 UTC m=+21.234549855 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:56:42 crc kubenswrapper[4811]: E0216 20:56:42.305320 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:43.305313439 +0000 UTC m=+21.234609377 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:56:42 crc kubenswrapper[4811]: E0216 20:56:42.305334 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:43.305328099 +0000 UTC m=+21.234624027 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.406634 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:42 crc kubenswrapper[4811]: E0216 20:56:42.406827 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:42 crc kubenswrapper[4811]: E0216 20:56:42.406848 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:42 crc kubenswrapper[4811]: E0216 20:56:42.406862 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:42 crc kubenswrapper[4811]: E0216 20:56:42.406928 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-16 20:56:43.406910093 +0000 UTC m=+21.336206031 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.505157 4811 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 16 20:56:42 crc kubenswrapper[4811]: W0216 20:56:42.505516 4811 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 16 20:56:42 crc kubenswrapper[4811]: W0216 20:56:42.505517 4811 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 16 20:56:42 crc kubenswrapper[4811]: W0216 20:56:42.505615 4811 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 16 20:56:42 crc kubenswrapper[4811]: E0216 20:56:42.506739 4811 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.9:33830->38.102.83.9:6443: use of closed network connection" 
event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1894d593906e3fe4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 20:56:23.223500772 +0000 UTC m=+1.152796750,LastTimestamp:2026-02-16 20:56:23.223500772 +0000 UTC m=+1.152796750,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.651566 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 14:31:14.719448812 +0000 UTC Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.702734 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.702806 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:42 crc kubenswrapper[4811]: E0216 20:56:42.702954 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:56:42 crc kubenswrapper[4811]: E0216 20:56:42.703129 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.711826 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.713064 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.715386 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.716381 4811 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-16 20:51:41 +0000 UTC, rotation deadline is 2026-11-02 14:25:43.654616032 +0000 UTC Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.716491 4811 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6209h29m0.938130681s for next certificate rotation Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.716865 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" 
path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.719032 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.720257 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.721483 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.723572 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.724939 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.725058 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.726815 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.728030 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.730320 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.731388 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.732446 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 
20:56:42.734310 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.735419 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.737391 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.738337 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.739511 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.740226 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.741348 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.741851 4811 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.742518 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.742938 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.743618 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.744040 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.744665 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.745310 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.745792 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.746409 4811 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.746899 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.747408 4811 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.747514 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.748916 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.752035 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.752488 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.753961 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 16 20:56:42 
crc kubenswrapper[4811]: I0216 20:56:42.755384 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.755938 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.757016 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.757693 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.758130 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.759049 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.760118 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.760749 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 16 20:56:42 
crc kubenswrapper[4811]: I0216 20:56:42.761560 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.762093 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.763092 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.763837 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.764720 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.765161 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.765631 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.766497 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 16 20:56:42 
crc kubenswrapper[4811]: I0216 20:56:42.767039 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.767956 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.809314 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.833520 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b"} Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.833584 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0343dd417f9fef1f249a4754b972c5b3b6fd748a7c760072880f6e8602d322ed"} Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.834372 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.840031 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409"} Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.840107 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16"} Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.840120 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"dbf8e900d37b24a48c1338f0a1ddf6b2bb9dbc99b76e2abea268279737abf79f"} Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 
20:56:42.841935 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9f7905ae12ae15e5ece646ff35484bc8d8dcf3b2e22af06ddb6105a22a4000a9"} Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.862782 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.882621 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.900902 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.917627 4811 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.935290 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.951353 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.970038 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.984735 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:42 crc kubenswrapper[4811]: I0216 20:56:42.999656 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, 
/tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:43 crc kubenswrapper[4811]: I0216 20:56:43.013215 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:43 crc kubenswrapper[4811]: I0216 20:56:43.316033 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:56:43 crc kubenswrapper[4811]: I0216 20:56:43.316154 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:43 crc kubenswrapper[4811]: E0216 20:56:43.316301 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:56:43 crc kubenswrapper[4811]: E0216 20:56:43.316316 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:56:45.316270685 +0000 UTC m=+23.245566613 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:56:43 crc kubenswrapper[4811]: I0216 20:56:43.316185 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:43 crc kubenswrapper[4811]: E0216 20:56:43.316365 4811 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:45.316355758 +0000 UTC m=+23.245651696 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:56:43 crc kubenswrapper[4811]: I0216 20:56:43.316384 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:43 crc kubenswrapper[4811]: E0216 20:56:43.316455 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:43 crc kubenswrapper[4811]: E0216 20:56:43.316447 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:56:43 crc kubenswrapper[4811]: E0216 20:56:43.316572 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:45.316548262 +0000 UTC m=+23.245844200 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:56:43 crc kubenswrapper[4811]: E0216 20:56:43.316468 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:43 crc kubenswrapper[4811]: E0216 20:56:43.316596 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:43 crc kubenswrapper[4811]: E0216 20:56:43.316642 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:45.316625144 +0000 UTC m=+23.245921082 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:43 crc kubenswrapper[4811]: I0216 20:56:43.417504 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:43 crc kubenswrapper[4811]: E0216 20:56:43.417636 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:43 crc kubenswrapper[4811]: E0216 20:56:43.417652 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:43 crc kubenswrapper[4811]: E0216 20:56:43.417664 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:43 crc kubenswrapper[4811]: E0216 20:56:43.417711 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-16 20:56:45.417698585 +0000 UTC m=+23.346994523 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:43 crc kubenswrapper[4811]: I0216 20:56:43.652293 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 06:45:29.902532772 +0000 UTC Feb 16 20:56:43 crc kubenswrapper[4811]: I0216 20:56:43.702837 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:43 crc kubenswrapper[4811]: E0216 20:56:43.703010 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:56:43 crc kubenswrapper[4811]: I0216 20:56:43.836544 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 16 20:56:43 crc kubenswrapper[4811]: I0216 20:56:43.852225 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\
",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:43 crc kubenswrapper[4811]: I0216 20:56:43.858802 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 16 20:56:43 crc kubenswrapper[4811]: I0216 20:56:43.869028 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:43 crc kubenswrapper[4811]: I0216 20:56:43.883132 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:43 crc kubenswrapper[4811]: I0216 20:56:43.895577 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:43 crc kubenswrapper[4811]: I0216 20:56:43.908454 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:43 crc kubenswrapper[4811]: I0216 20:56:43.923767 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, 
/tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:43 crc kubenswrapper[4811]: I0216 20:56:43.939571 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:43 crc kubenswrapper[4811]: I0216 20:56:43.952915 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 16 20:56:43 crc kubenswrapper[4811]: I0216 20:56:43.960736 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:43 crc kubenswrapper[4811]: I0216 20:56:43.981547 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.003874 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.036469 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.060058 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.069033 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-55x7j"] Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.069462 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-55x7j" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.070669 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-fh2mx"] Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.071027 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-mzmxb"] Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.071614 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.072156 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" Feb 16 20:56:44 crc kubenswrapper[4811]: W0216 20:56:44.078286 4811 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 16 20:56:44 crc kubenswrapper[4811]: E0216 20:56:44.078348 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 20:56:44 crc kubenswrapper[4811]: W0216 20:56:44.078367 4811 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: secrets "node-resolver-dockercfg-kz9s7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-dns": no 
relationship found between node 'crc' and this object Feb 16 20:56:44 crc kubenswrapper[4811]: W0216 20:56:44.078413 4811 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Feb 16 20:56:44 crc kubenswrapper[4811]: E0216 20:56:44.078436 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-resolver-dockercfg-kz9s7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 20:56:44 crc kubenswrapper[4811]: E0216 20:56:44.078473 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 20:56:44 crc kubenswrapper[4811]: W0216 20:56:44.078474 4811 reflector.go:561] object-"openshift-multus"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 16 20:56:44 crc kubenswrapper[4811]: W0216 20:56:44.078530 4811 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list 
resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Feb 16 20:56:44 crc kubenswrapper[4811]: E0216 20:56:44.078560 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 20:56:44 crc kubenswrapper[4811]: W0216 20:56:44.078430 4811 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: configmaps "default-cni-sysctl-allowlist" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.078590 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-mgctp"] Feb 16 20:56:44 crc kubenswrapper[4811]: W0216 20:56:44.078629 4811 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed to list *v1.ConfigMap: configmaps "cni-copy-resources" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 16 20:56:44 crc kubenswrapper[4811]: E0216 20:56:44.078650 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cni-copy-resources\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" 
logger="UnhandledError" Feb 16 20:56:44 crc kubenswrapper[4811]: E0216 20:56:44.078600 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"default-cni-sysctl-allowlist\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 20:56:44 crc kubenswrapper[4811]: E0216 20:56:44.078586 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.079133 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: W0216 20:56:44.087277 4811 reflector.go:561] object-"openshift-multus"/"default-dockercfg-2q5b6": failed to list *v1.Secret: secrets "default-dockercfg-2q5b6" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 16 20:56:44 crc kubenswrapper[4811]: E0216 20:56:44.087326 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-dockercfg-2q5b6\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-2q5b6\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.087604 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.088420 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.089824 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.089867 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.089939 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.090171 4811 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 20:56:44 crc kubenswrapper[4811]: W0216 20:56:44.094216 4811 reflector.go:561] object-"openshift-multus"/"multus-daemon-config": failed to list *v1.ConfigMap: configmaps "multus-daemon-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.094243 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: E0216 20:56:44.094264 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-daemon-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"multus-daemon-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.111618 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.134345 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.161845 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.195237 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.216757 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.223241 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-host-var-lib-kubelet\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.223279 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/746baa9e-089b-4907-9809-72705f44cd00-hosts-file\") pod \"node-resolver-55x7j\" (UID: \"746baa9e-089b-4907-9809-72705f44cd00\") " pod="openshift-dns/node-resolver-55x7j" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.223298 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-os-release\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.223314 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-hostroot\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.223330 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/aa95b3fc-1bfa-44f3-b568-7f325b230c3c-rootfs\") pod 
\"machine-config-daemon-fh2mx\" (UID: \"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\") " pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.223344 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa95b3fc-1bfa-44f3-b568-7f325b230c3c-proxy-tls\") pod \"machine-config-daemon-fh2mx\" (UID: \"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\") " pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.223398 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-host-run-k8s-cni-cncf-io\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.223472 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-host-run-multus-certs\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.223585 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-multus-cni-dir\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.223608 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/a946fefd-e014-48b1-995b-ef221a88bc73-multus-daemon-config\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.223629 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrlt6\" (UniqueName: \"kubernetes.io/projected/746baa9e-089b-4907-9809-72705f44cd00-kube-api-access-hrlt6\") pod \"node-resolver-55x7j\" (UID: \"746baa9e-089b-4907-9809-72705f44cd00\") " pod="openshift-dns/node-resolver-55x7j" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.223662 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa95b3fc-1bfa-44f3-b568-7f325b230c3c-mcd-auth-proxy-config\") pod \"machine-config-daemon-fh2mx\" (UID: \"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\") " pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.223687 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/479f901f-0d27-49cb-8ce9-861848c4e0b7-os-release\") pod \"multus-additional-cni-plugins-mzmxb\" (UID: \"479f901f-0d27-49cb-8ce9-861848c4e0b7\") " pod="openshift-multus/multus-additional-cni-plugins-mzmxb" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.223710 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-multus-socket-dir-parent\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.223753 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-host-var-lib-cni-multus\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.223783 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76fcv\" (UniqueName: \"kubernetes.io/projected/a946fefd-e014-48b1-995b-ef221a88bc73-kube-api-access-76fcv\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.223822 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/479f901f-0d27-49cb-8ce9-861848c4e0b7-system-cni-dir\") pod \"multus-additional-cni-plugins-mzmxb\" (UID: \"479f901f-0d27-49cb-8ce9-861848c4e0b7\") " pod="openshift-multus/multus-additional-cni-plugins-mzmxb" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.223880 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-system-cni-dir\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.223908 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/479f901f-0d27-49cb-8ce9-861848c4e0b7-cni-binary-copy\") pod \"multus-additional-cni-plugins-mzmxb\" (UID: \"479f901f-0d27-49cb-8ce9-861848c4e0b7\") " pod="openshift-multus/multus-additional-cni-plugins-mzmxb" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 
20:56:44.223935 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-host-var-lib-cni-bin\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.223953 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-etc-kubernetes\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.223973 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqkml\" (UniqueName: \"kubernetes.io/projected/aa95b3fc-1bfa-44f3-b568-7f325b230c3c-kube-api-access-sqkml\") pod \"machine-config-daemon-fh2mx\" (UID: \"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\") " pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.223997 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/479f901f-0d27-49cb-8ce9-861848c4e0b7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mzmxb\" (UID: \"479f901f-0d27-49cb-8ce9-861848c4e0b7\") " pod="openshift-multus/multus-additional-cni-plugins-mzmxb" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.224049 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-cnibin\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc 
kubenswrapper[4811]: I0216 20:56:44.224089 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-host-run-netns\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.224130 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-multus-conf-dir\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.224273 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/479f901f-0d27-49cb-8ce9-861848c4e0b7-cnibin\") pod \"multus-additional-cni-plugins-mzmxb\" (UID: \"479f901f-0d27-49cb-8ce9-861848c4e0b7\") " pod="openshift-multus/multus-additional-cni-plugins-mzmxb" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.224386 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/479f901f-0d27-49cb-8ce9-861848c4e0b7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mzmxb\" (UID: \"479f901f-0d27-49cb-8ce9-861848c4e0b7\") " pod="openshift-multus/multus-additional-cni-plugins-mzmxb" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.224436 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a946fefd-e014-48b1-995b-ef221a88bc73-cni-binary-copy\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 
20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.224460 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcbh2\" (UniqueName: \"kubernetes.io/projected/479f901f-0d27-49cb-8ce9-861848c4e0b7-kube-api-access-tcbh2\") pod \"multus-additional-cni-plugins-mzmxb\" (UID: \"479f901f-0d27-49cb-8ce9-861848c4e0b7\") " pod="openshift-multus/multus-additional-cni-plugins-mzmxb" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.245732 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-
dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369
a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6
7314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.273997 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.287435 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.300075 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.315509 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325069 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-os-release\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325122 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-hostroot\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325149 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/aa95b3fc-1bfa-44f3-b568-7f325b230c3c-rootfs\") pod \"machine-config-daemon-fh2mx\" (UID: \"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\") " pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325174 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/aa95b3fc-1bfa-44f3-b568-7f325b230c3c-proxy-tls\") pod \"machine-config-daemon-fh2mx\" (UID: \"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\") " pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325224 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-host-run-k8s-cni-cncf-io\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325249 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-host-run-multus-certs\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325281 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-multus-cni-dir\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325307 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a946fefd-e014-48b1-995b-ef221a88bc73-multus-daemon-config\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325330 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrlt6\" (UniqueName: \"kubernetes.io/projected/746baa9e-089b-4907-9809-72705f44cd00-kube-api-access-hrlt6\") 
pod \"node-resolver-55x7j\" (UID: \"746baa9e-089b-4907-9809-72705f44cd00\") " pod="openshift-dns/node-resolver-55x7j" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325330 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-hostroot\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325360 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa95b3fc-1bfa-44f3-b568-7f325b230c3c-mcd-auth-proxy-config\") pod \"machine-config-daemon-fh2mx\" (UID: \"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\") " pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325444 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-host-run-k8s-cni-cncf-io\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325493 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/479f901f-0d27-49cb-8ce9-861848c4e0b7-os-release\") pod \"multus-additional-cni-plugins-mzmxb\" (UID: \"479f901f-0d27-49cb-8ce9-861848c4e0b7\") " pod="openshift-multus/multus-additional-cni-plugins-mzmxb" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325355 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/aa95b3fc-1bfa-44f3-b568-7f325b230c3c-rootfs\") pod \"machine-config-daemon-fh2mx\" (UID: \"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\") " 
pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325558 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-os-release\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325570 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-multus-cni-dir\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325588 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/479f901f-0d27-49cb-8ce9-861848c4e0b7-os-release\") pod \"multus-additional-cni-plugins-mzmxb\" (UID: \"479f901f-0d27-49cb-8ce9-861848c4e0b7\") " pod="openshift-multus/multus-additional-cni-plugins-mzmxb" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325598 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-host-run-multus-certs\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325572 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-host-var-lib-cni-multus\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325618 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-host-var-lib-cni-multus\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325744 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76fcv\" (UniqueName: \"kubernetes.io/projected/a946fefd-e014-48b1-995b-ef221a88bc73-kube-api-access-76fcv\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325807 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/479f901f-0d27-49cb-8ce9-861848c4e0b7-system-cni-dir\") pod \"multus-additional-cni-plugins-mzmxb\" (UID: \"479f901f-0d27-49cb-8ce9-861848c4e0b7\") " pod="openshift-multus/multus-additional-cni-plugins-mzmxb" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325883 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/479f901f-0d27-49cb-8ce9-861848c4e0b7-system-cni-dir\") pod \"multus-additional-cni-plugins-mzmxb\" (UID: \"479f901f-0d27-49cb-8ce9-861848c4e0b7\") " pod="openshift-multus/multus-additional-cni-plugins-mzmxb" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325896 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-multus-socket-dir-parent\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325948 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-system-cni-dir\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325965 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/479f901f-0d27-49cb-8ce9-861848c4e0b7-cni-binary-copy\") pod \"multus-additional-cni-plugins-mzmxb\" (UID: \"479f901f-0d27-49cb-8ce9-861848c4e0b7\") " pod="openshift-multus/multus-additional-cni-plugins-mzmxb" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325986 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-etc-kubernetes\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.325996 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-multus-socket-dir-parent\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.326042 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-system-cni-dir\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.326003 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqkml\" (UniqueName: 
\"kubernetes.io/projected/aa95b3fc-1bfa-44f3-b568-7f325b230c3c-kube-api-access-sqkml\") pod \"machine-config-daemon-fh2mx\" (UID: \"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\") " pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.326060 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-etc-kubernetes\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.326092 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/479f901f-0d27-49cb-8ce9-861848c4e0b7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mzmxb\" (UID: \"479f901f-0d27-49cb-8ce9-861848c4e0b7\") " pod="openshift-multus/multus-additional-cni-plugins-mzmxb" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.326116 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-host-var-lib-cni-bin\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.326134 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-cnibin\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.326150 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-host-run-netns\") pod 
\"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.326168 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-multus-conf-dir\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.326182 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-host-var-lib-cni-bin\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.326209 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/479f901f-0d27-49cb-8ce9-861848c4e0b7-cnibin\") pod \"multus-additional-cni-plugins-mzmxb\" (UID: \"479f901f-0d27-49cb-8ce9-861848c4e0b7\") " pod="openshift-multus/multus-additional-cni-plugins-mzmxb" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.326227 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-host-run-netns\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.326238 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-cnibin\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.326252 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/479f901f-0d27-49cb-8ce9-861848c4e0b7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mzmxb\" (UID: \"479f901f-0d27-49cb-8ce9-861848c4e0b7\") " pod="openshift-multus/multus-additional-cni-plugins-mzmxb" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.326285 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/479f901f-0d27-49cb-8ce9-861848c4e0b7-cnibin\") pod \"multus-additional-cni-plugins-mzmxb\" (UID: \"479f901f-0d27-49cb-8ce9-861848c4e0b7\") " pod="openshift-multus/multus-additional-cni-plugins-mzmxb" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.326288 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a946fefd-e014-48b1-995b-ef221a88bc73-cni-binary-copy\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.326352 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcbh2\" (UniqueName: \"kubernetes.io/projected/479f901f-0d27-49cb-8ce9-861848c4e0b7-kube-api-access-tcbh2\") pod \"multus-additional-cni-plugins-mzmxb\" (UID: \"479f901f-0d27-49cb-8ce9-861848c4e0b7\") " pod="openshift-multus/multus-additional-cni-plugins-mzmxb" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.326285 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-multus-conf-dir\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.326376 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-host-var-lib-kubelet\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.326396 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/746baa9e-089b-4907-9809-72705f44cd00-hosts-file\") pod \"node-resolver-55x7j\" (UID: \"746baa9e-089b-4907-9809-72705f44cd00\") " pod="openshift-dns/node-resolver-55x7j" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.326424 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a946fefd-e014-48b1-995b-ef221a88bc73-host-var-lib-kubelet\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.326465 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/746baa9e-089b-4907-9809-72705f44cd00-hosts-file\") pod \"node-resolver-55x7j\" (UID: \"746baa9e-089b-4907-9809-72705f44cd00\") " pod="openshift-dns/node-resolver-55x7j" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.326670 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa95b3fc-1bfa-44f3-b568-7f325b230c3c-mcd-auth-proxy-config\") pod \"machine-config-daemon-fh2mx\" (UID: \"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\") " pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.327168 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/479f901f-0d27-49cb-8ce9-861848c4e0b7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mzmxb\" (UID: \"479f901f-0d27-49cb-8ce9-861848c4e0b7\") " pod="openshift-multus/multus-additional-cni-plugins-mzmxb" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.329562 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.330246 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa95b3fc-1bfa-44f3-b568-7f325b230c3c-proxy-tls\") pod \"machine-config-daemon-fh2mx\" (UID: \"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\") " pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.348225 4811 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.348754 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqkml\" (UniqueName: \"kubernetes.io/projected/aa95b3fc-1bfa-44f3-b568-7f325b230c3c-kube-api-access-sqkml\") pod \"machine-config-daemon-fh2mx\" (UID: \"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\") " pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.361738 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.411959 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" Feb 16 20:56:44 crc kubenswrapper[4811]: W0216 20:56:44.423814 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa95b3fc_1bfa_44f3_b568_7f325b230c3c.slice/crio-58919977912caf7ff456d818b7adc87343cf3b1c027be04a787bcc768ae561cc WatchSource:0}: Error finding container 58919977912caf7ff456d818b7adc87343cf3b1c027be04a787bcc768ae561cc: Status 404 returned error can't find the container with id 58919977912caf7ff456d818b7adc87343cf3b1c027be04a787bcc768ae561cc Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.476056 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x2ggt"] Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.477127 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.478789 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.479465 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.479574 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.479663 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.480785 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.480997 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.481936 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.492563 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.507274 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.520334 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.534632 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d6
08d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.550490 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.573646 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.591104 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.604600 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.619449 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.629109 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.629219 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-var-lib-openvswitch\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.629253 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-run-openvswitch\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.629278 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-run-ovn\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.629301 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e1bbcd0c-f192-4210-831c-82e87a4768a7-ovn-node-metrics-cert\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.629370 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-run-netns\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.629397 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e1bbcd0c-f192-4210-831c-82e87a4768a7-ovnkube-script-lib\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.629419 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hmx4\" (UniqueName: \"kubernetes.io/projected/e1bbcd0c-f192-4210-831c-82e87a4768a7-kube-api-access-8hmx4\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.629444 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-cni-bin\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 
20:56:44.629486 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-cni-netd\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.629511 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-etc-openvswitch\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.629538 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-log-socket\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.629597 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-systemd-units\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.629619 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e1bbcd0c-f192-4210-831c-82e87a4768a7-env-overrides\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: 
I0216 20:56:44.629647 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-node-log\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.629670 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-run-ovn-kubernetes\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.629691 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e1bbcd0c-f192-4210-831c-82e87a4768a7-ovnkube-config\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.629716 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-slash\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.629740 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-run-systemd\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc 
kubenswrapper[4811]: I0216 20:56:44.629764 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-kubelet\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.642281 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"20
26-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"
image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-
resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.652793 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 18:09:48.123341873 +0000 UTC Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.660985 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.677681 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 
20:56:44.689870 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.702782 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.702853 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:44 crc kubenswrapper[4811]: E0216 20:56:44.702976 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:56:44 crc kubenswrapper[4811]: E0216 20:56:44.703132 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.730931 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e1bbcd0c-f192-4210-831c-82e87a4768a7-env-overrides\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.731410 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-systemd-units\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.731451 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-node-log\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.731480 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-run-ovn-kubernetes\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.731507 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e1bbcd0c-f192-4210-831c-82e87a4768a7-ovnkube-config\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.731546 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-slash\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.731556 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-systemd-units\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.731580 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-run-systemd\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.731646 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-run-systemd\") pod \"ovnkube-node-x2ggt\" 
(UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.731658 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-kubelet\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.731607 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-run-ovn-kubernetes\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.731697 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.731715 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-kubelet\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.731758 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-run-ovn\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.731683 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-slash\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.731785 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e1bbcd0c-f192-4210-831c-82e87a4768a7-ovn-node-metrics-cert\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.731766 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.731808 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e1bbcd0c-f192-4210-831c-82e87a4768a7-env-overrides\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.731798 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-run-ovn\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc 
kubenswrapper[4811]: I0216 20:56:44.731607 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-node-log\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.731917 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-var-lib-openvswitch\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.731963 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-run-openvswitch\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.731996 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-var-lib-openvswitch\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.732040 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-run-netns\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.732074 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e1bbcd0c-f192-4210-831c-82e87a4768a7-ovnkube-script-lib\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.732099 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-run-netns\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.732111 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-run-openvswitch\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.732124 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hmx4\" (UniqueName: \"kubernetes.io/projected/e1bbcd0c-f192-4210-831c-82e87a4768a7-kube-api-access-8hmx4\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.732229 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-cni-bin\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.732316 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-cni-netd\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.732325 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-cni-bin\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.732351 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-etc-openvswitch\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.732365 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-cni-netd\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.732378 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-log-socket\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.732414 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-etc-openvswitch\") pod 
\"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.732496 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-log-socket\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.732523 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e1bbcd0c-f192-4210-831c-82e87a4768a7-ovnkube-config\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.732702 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e1bbcd0c-f192-4210-831c-82e87a4768a7-ovnkube-script-lib\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.738626 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e1bbcd0c-f192-4210-831c-82e87a4768a7-ovn-node-metrics-cert\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.748922 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hmx4\" (UniqueName: \"kubernetes.io/projected/e1bbcd0c-f192-4210-831c-82e87a4768a7-kube-api-access-8hmx4\") pod \"ovnkube-node-x2ggt\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.807115 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:44 crc kubenswrapper[4811]: W0216 20:56:44.824771 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1bbcd0c_f192_4210_831c_82e87a4768a7.slice/crio-29472bcfd2ec457b40cacc17f1865ee2e7ec33788f857f742628e8b9ff741552 WatchSource:0}: Error finding container 29472bcfd2ec457b40cacc17f1865ee2e7ec33788f857f742628e8b9ff741552: Status 404 returned error can't find the container with id 29472bcfd2ec457b40cacc17f1865ee2e7ec33788f857f742628e8b9ff741552 Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.850498 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerStarted","Data":"29472bcfd2ec457b40cacc17f1865ee2e7ec33788f857f742628e8b9ff741552"} Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.853475 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerStarted","Data":"db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062"} Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.853547 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerStarted","Data":"13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba"} Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.853561 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" 
event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerStarted","Data":"58919977912caf7ff456d818b7adc87343cf3b1c027be04a787bcc768ae561cc"} Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.879617 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.898345 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 
20:56:44.915683 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.934870 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, 
/tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.936612 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.953010 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.971750 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:44 crc kubenswrapper[4811]: I0216 20:56:44.987232 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d6
08d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:44Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.002475 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:45Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.008708 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.017021 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/a946fefd-e014-48b1-995b-ef221a88bc73-multus-daemon-config\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.029112 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:45Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.034535 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.047703 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:45Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.064690 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:45Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.080448 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30
d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:45Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.104518 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:45Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.297587 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 20:56:45 crc kubenswrapper[4811]: E0216 20:56:45.326903 4811 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition Feb 16 20:56:45 crc kubenswrapper[4811]: E0216 20:56:45.326960 4811 configmap.go:193] Couldn't get configMap openshift-multus/default-cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition Feb 16 20:56:45 crc kubenswrapper[4811]: E0216 20:56:45.327029 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a946fefd-e014-48b1-995b-ef221a88bc73-cni-binary-copy podName:a946fefd-e014-48b1-995b-ef221a88bc73 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:45.827000856 +0000 UTC m=+23.756296794 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/a946fefd-e014-48b1-995b-ef221a88bc73-cni-binary-copy") pod "multus-mgctp" (UID: "a946fefd-e014-48b1-995b-ef221a88bc73") : failed to sync configmap cache: timed out waiting for the condition Feb 16 20:56:45 crc kubenswrapper[4811]: E0216 20:56:45.327094 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/479f901f-0d27-49cb-8ce9-861848c4e0b7-cni-sysctl-allowlist podName:479f901f-0d27-49cb-8ce9-861848c4e0b7 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:45.827067567 +0000 UTC m=+23.756363715 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/479f901f-0d27-49cb-8ce9-861848c4e0b7-cni-sysctl-allowlist") pod "multus-additional-cni-plugins-mzmxb" (UID: "479f901f-0d27-49cb-8ce9-861848c4e0b7") : failed to sync configmap cache: timed out waiting for the condition Feb 16 20:56:45 crc kubenswrapper[4811]: E0216 20:56:45.327186 4811 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition Feb 16 20:56:45 crc kubenswrapper[4811]: E0216 20:56:45.327243 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/479f901f-0d27-49cb-8ce9-861848c4e0b7-cni-binary-copy podName:479f901f-0d27-49cb-8ce9-861848c4e0b7 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:45.827232231 +0000 UTC m=+23.756528429 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/479f901f-0d27-49cb-8ce9-861848c4e0b7-cni-binary-copy") pod "multus-additional-cni-plugins-mzmxb" (UID: "479f901f-0d27-49cb-8ce9-861848c4e0b7") : failed to sync configmap cache: timed out waiting for the condition Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.340702 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.340874 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.340980 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:45 crc kubenswrapper[4811]: E0216 20:56:45.341031 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:56:49.340976917 +0000 UTC m=+27.270272865 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:56:45 crc kubenswrapper[4811]: E0216 20:56:45.341115 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.341120 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:45 crc kubenswrapper[4811]: E0216 20:56:45.341223 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:49.341174082 +0000 UTC m=+27.270470230 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:56:45 crc kubenswrapper[4811]: E0216 20:56:45.341285 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:56:45 crc kubenswrapper[4811]: E0216 20:56:45.341287 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:45 crc kubenswrapper[4811]: E0216 20:56:45.341346 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:49.341336176 +0000 UTC m=+27.270632374 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:56:45 crc kubenswrapper[4811]: E0216 20:56:45.341353 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:45 crc kubenswrapper[4811]: E0216 20:56:45.341383 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:45 crc kubenswrapper[4811]: E0216 20:56:45.341504 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:49.341470249 +0000 UTC m=+27.270766217 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.426688 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.436418 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76fcv\" (UniqueName: \"kubernetes.io/projected/a946fefd-e014-48b1-995b-ef221a88bc73-kube-api-access-76fcv\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.440862 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcbh2\" (UniqueName: \"kubernetes.io/projected/479f901f-0d27-49cb-8ce9-861848c4e0b7-kube-api-access-tcbh2\") pod \"multus-additional-cni-plugins-mzmxb\" (UID: \"479f901f-0d27-49cb-8ce9-861848c4e0b7\") " pod="openshift-multus/multus-additional-cni-plugins-mzmxb" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.442128 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:45 crc kubenswrapper[4811]: E0216 20:56:45.442412 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:45 crc kubenswrapper[4811]: E0216 20:56:45.442443 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:45 crc kubenswrapper[4811]: E0216 20:56:45.442465 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:45 crc kubenswrapper[4811]: E0216 20:56:45.442573 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:49.44254611 +0000 UTC m=+27.371842158 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.461321 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.534867 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.542366 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.543505 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrlt6\" (UniqueName: \"kubernetes.io/projected/746baa9e-089b-4907-9809-72705f44cd00-kube-api-access-hrlt6\") pod \"node-resolver-55x7j\" (UID: \"746baa9e-089b-4907-9809-72705f44cd00\") " pod="openshift-dns/node-resolver-55x7j" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.584309 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-55x7j" Feb 16 20:56:45 crc kubenswrapper[4811]: W0216 20:56:45.600241 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod746baa9e_089b_4907_9809_72705f44cd00.slice/crio-944380a10ed1401aa3f47bfba2a82220db73229f19e115856e2436b517885ed4 WatchSource:0}: Error finding container 944380a10ed1401aa3f47bfba2a82220db73229f19e115856e2436b517885ed4: Status 404 returned error can't find the container with id 944380a10ed1401aa3f47bfba2a82220db73229f19e115856e2436b517885ed4 Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.621608 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.653406 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 18:00:14.133192295 +0000 UTC Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.702276 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:45 crc kubenswrapper[4811]: E0216 20:56:45.702442 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.846516 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/479f901f-0d27-49cb-8ce9-861848c4e0b7-cni-binary-copy\") pod \"multus-additional-cni-plugins-mzmxb\" (UID: \"479f901f-0d27-49cb-8ce9-861848c4e0b7\") " pod="openshift-multus/multus-additional-cni-plugins-mzmxb" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.846589 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/479f901f-0d27-49cb-8ce9-861848c4e0b7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mzmxb\" (UID: \"479f901f-0d27-49cb-8ce9-861848c4e0b7\") " pod="openshift-multus/multus-additional-cni-plugins-mzmxb" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.846623 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a946fefd-e014-48b1-995b-ef221a88bc73-cni-binary-copy\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.847764 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a946fefd-e014-48b1-995b-ef221a88bc73-cni-binary-copy\") pod \"multus-mgctp\" (UID: \"a946fefd-e014-48b1-995b-ef221a88bc73\") " pod="openshift-multus/multus-mgctp" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.847757 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/479f901f-0d27-49cb-8ce9-861848c4e0b7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mzmxb\" (UID: 
\"479f901f-0d27-49cb-8ce9-861848c4e0b7\") " pod="openshift-multus/multus-additional-cni-plugins-mzmxb" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.847771 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/479f901f-0d27-49cb-8ce9-861848c4e0b7-cni-binary-copy\") pod \"multus-additional-cni-plugins-mzmxb\" (UID: \"479f901f-0d27-49cb-8ce9-861848c4e0b7\") " pod="openshift-multus/multus-additional-cni-plugins-mzmxb" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.859797 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c"} Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.861798 4811 generic.go:334] "Generic (PLEG): container finished" podID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerID="29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db" exitCode=0 Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.861850 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerDied","Data":"29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db"} Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.864332 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-55x7j" event={"ID":"746baa9e-089b-4907-9809-72705f44cd00","Type":"ContainerStarted","Data":"944380a10ed1401aa3f47bfba2a82220db73229f19e115856e2436b517885ed4"} Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.882258 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:45Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.894897 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.899045 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:45Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.924466 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-mgctp" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.938214 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:45Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:45 crc kubenswrapper[4811]: I0216 20:56:45.964691 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:45Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:45 crc kubenswrapper[4811]: W0216 20:56:45.986940 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda946fefd_e014_48b1_995b_ef221a88bc73.slice/crio-074726c4ad8be5dc3450cb9e1335be70ad6befca627fd3d2a190fcd0ea58de13 WatchSource:0}: Error finding container 074726c4ad8be5dc3450cb9e1335be70ad6befca627fd3d2a190fcd0ea58de13: Status 404 returned error can't find the container 
with id 074726c4ad8be5dc3450cb9e1335be70ad6befca627fd3d2a190fcd0ea58de13 Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.000285 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:45Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.022939 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.043608 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d6
08d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.058426 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.079596 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.095720 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.110013 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.125045 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30
d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.145649 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.166571 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.185402 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.203604 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.220426 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d6
08d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.236062 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.257298 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.279652 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.307679 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.321077 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.332813 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30
d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.354499 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.368119 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.383363 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.657119 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 08:16:25.778638476 +0000 UTC Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.702802 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.702939 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:46 crc kubenswrapper[4811]: E0216 20:56:46.703173 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:56:46 crc kubenswrapper[4811]: E0216 20:56:46.703412 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.769727 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-xwj8v"] Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.770269 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xwj8v" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.772355 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.772852 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.772964 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.773522 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.786187 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.799606 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.812662 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30
d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.834801 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.855030 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.859483 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnsms\" (UniqueName: \"kubernetes.io/projected/7a462664-d492-4632-bd4d-e1a890961995-kube-api-access-qnsms\") pod \"node-ca-xwj8v\" (UID: \"7a462664-d492-4632-bd4d-e1a890961995\") " pod="openshift-image-registry/node-ca-xwj8v" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.859539 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a462664-d492-4632-bd4d-e1a890961995-host\") pod \"node-ca-xwj8v\" (UID: \"7a462664-d492-4632-bd4d-e1a890961995\") " pod="openshift-image-registry/node-ca-xwj8v" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.859559 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7a462664-d492-4632-bd4d-e1a890961995-serviceca\") pod 
\"node-ca-xwj8v\" (UID: \"7a462664-d492-4632-bd4d-e1a890961995\") " pod="openshift-image-registry/node-ca-xwj8v" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.871473 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.875893 4811 generic.go:334] "Generic (PLEG): container finished" podID="479f901f-0d27-49cb-8ce9-861848c4e0b7" containerID="7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b" exitCode=0 Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.875960 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" event={"ID":"479f901f-0d27-49cb-8ce9-861848c4e0b7","Type":"ContainerDied","Data":"7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b"} Feb 16 
20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.875989 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" event={"ID":"479f901f-0d27-49cb-8ce9-861848c4e0b7","Type":"ContainerStarted","Data":"b1bafdb8697e611bc2cf76a73ae959ef76618388b5aead3e25c54ca5aa81e246"} Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.877980 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mgctp" event={"ID":"a946fefd-e014-48b1-995b-ef221a88bc73","Type":"ContainerStarted","Data":"9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b"} Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.878053 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mgctp" event={"ID":"a946fefd-e014-48b1-995b-ef221a88bc73","Type":"ContainerStarted","Data":"074726c4ad8be5dc3450cb9e1335be70ad6befca627fd3d2a190fcd0ea58de13"} Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.882288 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerStarted","Data":"fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921"} Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.882332 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerStarted","Data":"a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819"} Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.882344 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerStarted","Data":"83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc"} Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.882355 4811 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerStarted","Data":"f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7"} Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.882365 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerStarted","Data":"bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4"} Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.882375 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerStarted","Data":"b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f"} Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.884153 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-55x7j" event={"ID":"746baa9e-089b-4907-9809-72705f44cd00","Type":"ContainerStarted","Data":"cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9"} Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.893140 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.910619 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.925032 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.940351 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.953450 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.960306 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a462664-d492-4632-bd4d-e1a890961995-host\") pod \"node-ca-xwj8v\" (UID: \"7a462664-d492-4632-bd4d-e1a890961995\") " pod="openshift-image-registry/node-ca-xwj8v" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.960388 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/7a462664-d492-4632-bd4d-e1a890961995-serviceca\") pod \"node-ca-xwj8v\" (UID: \"7a462664-d492-4632-bd4d-e1a890961995\") " pod="openshift-image-registry/node-ca-xwj8v" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.960526 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a462664-d492-4632-bd4d-e1a890961995-host\") pod \"node-ca-xwj8v\" (UID: \"7a462664-d492-4632-bd4d-e1a890961995\") " pod="openshift-image-registry/node-ca-xwj8v" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.961615 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnsms\" (UniqueName: \"kubernetes.io/projected/7a462664-d492-4632-bd4d-e1a890961995-kube-api-access-qnsms\") pod \"node-ca-xwj8v\" (UID: \"7a462664-d492-4632-bd4d-e1a890961995\") " pod="openshift-image-registry/node-ca-xwj8v" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.962101 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7a462664-d492-4632-bd4d-e1a890961995-serviceca\") pod \"node-ca-xwj8v\" (UID: \"7a462664-d492-4632-bd4d-e1a890961995\") " pod="openshift-image-registry/node-ca-xwj8v" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.975003 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.988110 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4811]: I0216 20:56:46.992129 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnsms\" (UniqueName: \"kubernetes.io/projected/7a462664-d492-4632-bd4d-e1a890961995-kube-api-access-qnsms\") pod \"node-ca-xwj8v\" (UID: \"7a462664-d492-4632-bd4d-e1a890961995\") " pod="openshift-image-registry/node-ca-xwj8v" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.009992 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.025093 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.050824 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.060707 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.073927 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.087164 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xwj8v" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.097717 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kube
rnetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4
05f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c687744
1ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: W0216 20:56:47.102694 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a462664_d492_4632_bd4d_e1a890961995.slice/crio-f0599c96118d45f30560a21e29352afdae6e610eb30ebf4cce7a69fcfb2f03ef WatchSource:0}: Error finding container f0599c96118d45f30560a21e29352afdae6e610eb30ebf4cce7a69fcfb2f03ef: Status 404 returned error can't find the container with id f0599c96118d45f30560a21e29352afdae6e610eb30ebf4cce7a69fcfb2f03ef Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.113037 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.125967 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.139559 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.157834 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.170638 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.199640 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.219633 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.234533 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.252232 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.254855 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.257778 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.279895 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.285044 4811 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\"
:\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.308118 4811 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.332066 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.361643 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.373483 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.392757 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.405442 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.420437 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.430880 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.443501 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.463345 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26
702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"f
inishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.475309 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.499961 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.514685 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.549407 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.596112 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.629913 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.659039 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 07:18:59.688663802 +0000 UTC Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.674860 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.702912 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:47 crc kubenswrapper[4811]: E0216 20:56:47.703119 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.709118 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.740378 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.747116 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.786296 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.830959 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.868185 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.890249 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" event={"ID":"479f901f-0d27-49cb-8ce9-861848c4e0b7","Type":"ContainerStarted","Data":"29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5"} Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.892585 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xwj8v" event={"ID":"7a462664-d492-4632-bd4d-e1a890961995","Type":"ContainerStarted","Data":"843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a"} Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.892645 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xwj8v" event={"ID":"7a462664-d492-4632-bd4d-e1a890961995","Type":"ContainerStarted","Data":"f0599c96118d45f30560a21e29352afdae6e610eb30ebf4cce7a69fcfb2f03ef"} Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.923023 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4811]: E0216 20:56:47.923610 4811 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:47 crc kubenswrapper[4811]: I0216 20:56:47.976265 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.008395 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.044651 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.045955 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.047036 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.047078 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.047091 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.047235 4811 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.113800 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.119973 4811 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.120259 4811 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.121566 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.121593 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.121604 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.121621 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.121654 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:48Z","lastTransitionTime":"2026-02-16T20:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:48 crc kubenswrapper[4811]: E0216 20:56:48.136769 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.141084 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.141116 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.141143 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.141160 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.141171 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:48Z","lastTransitionTime":"2026-02-16T20:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.162507 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.162544 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.162553 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.162568 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.162578 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:48Z","lastTransitionTime":"2026-02-16T20:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.178324 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233b
db7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.186465 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.186543 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.186572 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.186614 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.186643 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:48Z","lastTransitionTime":"2026-02-16T20:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:48 crc kubenswrapper[4811]: E0216 20:56:48.202389 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.206716 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.206764 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.206778 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.206797 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.206817 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:48Z","lastTransitionTime":"2026-02-16T20:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.214368 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: E0216 20:56:48.235384 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: E0216 20:56:48.235505 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.237296 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.237327 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.237342 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.237359 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.237372 4811 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:48Z","lastTransitionTime":"2026-02-16T20:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.252614 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.292603 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.326941 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.340319 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.340375 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.340386 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.340405 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.340417 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:48Z","lastTransitionTime":"2026-02-16T20:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.369409 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.414355 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.443336 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.443437 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.443447 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.443461 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.443472 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:48Z","lastTransitionTime":"2026-02-16T20:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.447496 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.494032 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.535190 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:
24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.545899 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.545954 
4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.545969 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.545988 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.546004 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:48Z","lastTransitionTime":"2026-02-16T20:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.566045 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.605771 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.647980 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.648031 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.648042 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.648064 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.648074 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:48Z","lastTransitionTime":"2026-02-16T20:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.650094 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.659675 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 18:43:59.732403465 +0000 UTC Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 
20:56:48.693701 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.702073 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.702116 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:48 crc kubenswrapper[4811]: E0216 20:56:48.702232 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:56:48 crc kubenswrapper[4811]: E0216 20:56:48.702400 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.731772 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.750476 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.750524 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.750537 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.750556 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.750567 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:48Z","lastTransitionTime":"2026-02-16T20:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.771454 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.853609 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.853657 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.853666 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.853689 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 
20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.853705 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:48Z","lastTransitionTime":"2026-02-16T20:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.899303 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerStarted","Data":"0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e"} Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.900645 4811 generic.go:334] "Generic (PLEG): container finished" podID="479f901f-0d27-49cb-8ce9-861848c4e0b7" containerID="29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5" exitCode=0 Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.901030 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" event={"ID":"479f901f-0d27-49cb-8ce9-861848c4e0b7","Type":"ContainerDied","Data":"29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5"} Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.916119 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.933868 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:
36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.945383 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.955788 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.955815 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.955823 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.955839 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.955850 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:48Z","lastTransitionTime":"2026-02-16T20:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.961610 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4811]: I0216 20:56:48.970642 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.007855 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.046929 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.057982 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.058020 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.058031 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 
20:56:49.058046 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.058056 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:49Z","lastTransitionTime":"2026-02-16T20:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.087703 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.126158 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.160704 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.160734 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.160742 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.160757 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.160766 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:49Z","lastTransitionTime":"2026-02-16T20:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.166031 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.208018 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30
d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.278433 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.278466 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.278480 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:49 crc 
kubenswrapper[4811]: I0216 20:56:49.278497 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.278509 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:49Z","lastTransitionTime":"2026-02-16T20:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.285489 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.297076 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.331738 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.368116 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.382411 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.382440 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.382451 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.382469 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.382479 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:49Z","lastTransitionTime":"2026-02-16T20:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.391062 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.391178 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:49 crc kubenswrapper[4811]: E0216 20:56:49.391241 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:56:57.39121708 +0000 UTC m=+35.320513028 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.391287 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:49 crc kubenswrapper[4811]: E0216 20:56:49.391330 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:49 crc kubenswrapper[4811]: E0216 20:56:49.391351 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:49 crc kubenswrapper[4811]: E0216 20:56:49.391365 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.391373 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:49 crc kubenswrapper[4811]: E0216 20:56:49.391407 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:57.391394804 +0000 UTC m=+35.320690752 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:49 crc kubenswrapper[4811]: E0216 20:56:49.391480 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:56:49 crc kubenswrapper[4811]: E0216 20:56:49.391525 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:57.391513926 +0000 UTC m=+35.320809874 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:56:49 crc kubenswrapper[4811]: E0216 20:56:49.391532 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:56:49 crc kubenswrapper[4811]: E0216 20:56:49.391687 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:57.39165341 +0000 UTC m=+35.320949378 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.486038 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.486109 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.486129 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.486157 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 
20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.486176 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:49Z","lastTransitionTime":"2026-02-16T20:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.492781 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:49 crc kubenswrapper[4811]: E0216 20:56:49.492967 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:49 crc kubenswrapper[4811]: E0216 20:56:49.493002 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:49 crc kubenswrapper[4811]: E0216 20:56:49.493022 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:49 crc kubenswrapper[4811]: E0216 20:56:49.493096 4811 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:57.493075209 +0000 UTC m=+35.422371187 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.589070 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.589118 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.589131 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.589149 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.589163 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:49Z","lastTransitionTime":"2026-02-16T20:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.660230 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 13:36:23.264743283 +0000 UTC Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.691828 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.691874 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.691887 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.691905 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.691919 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:49Z","lastTransitionTime":"2026-02-16T20:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.702441 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:49 crc kubenswrapper[4811]: E0216 20:56:49.702572 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.794889 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.794951 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.794971 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.794997 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.795015 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:49Z","lastTransitionTime":"2026-02-16T20:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.898273 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.898339 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.898362 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.898393 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.898412 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:49Z","lastTransitionTime":"2026-02-16T20:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.907604 4811 generic.go:334] "Generic (PLEG): container finished" podID="479f901f-0d27-49cb-8ce9-861848c4e0b7" containerID="fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701" exitCode=0 Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.907663 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" event={"ID":"479f901f-0d27-49cb-8ce9-861848c4e0b7","Type":"ContainerDied","Data":"fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701"} Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.924839 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.950893 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.968508 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:
36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4811]: I0216 20:56:49.990387 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.001407 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.001657 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.001736 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 
20:56:50.001818 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.001898 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:50Z","lastTransitionTime":"2026-02-16T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.020388 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.033923 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476
805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.052889 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.073301 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f
58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
26-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.087082 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.101886 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.104368 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.104396 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.104407 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:50 crc 
kubenswrapper[4811]: I0216 20:56:50.104423 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.104433 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:50Z","lastTransitionTime":"2026-02-16T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.115989 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.129139 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30
d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.144073 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.156030 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.170522 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 
20:56:50.206225 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.206259 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.206268 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.206283 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.206293 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:50Z","lastTransitionTime":"2026-02-16T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.308744 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.308792 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.308804 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.308820 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.308831 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:50Z","lastTransitionTime":"2026-02-16T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.412770 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.413036 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.413048 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.413064 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.413076 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:50Z","lastTransitionTime":"2026-02-16T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.516142 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.516232 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.516260 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.516290 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.516312 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:50Z","lastTransitionTime":"2026-02-16T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.618747 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.618787 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.618795 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.618811 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.618820 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:50Z","lastTransitionTime":"2026-02-16T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.660841 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 00:18:43.962469022 +0000 UTC Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.702294 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:50 crc kubenswrapper[4811]: E0216 20:56:50.702423 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.702294 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:50 crc kubenswrapper[4811]: E0216 20:56:50.702489 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.721256 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.721299 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.721312 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.721334 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.721347 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:50Z","lastTransitionTime":"2026-02-16T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.823548 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.823586 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.823596 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.823612 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.823622 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:50Z","lastTransitionTime":"2026-02-16T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.915024 4811 generic.go:334] "Generic (PLEG): container finished" podID="479f901f-0d27-49cb-8ce9-861848c4e0b7" containerID="0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa" exitCode=0 Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.915076 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" event={"ID":"479f901f-0d27-49cb-8ce9-861848c4e0b7","Type":"ContainerDied","Data":"0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa"} Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.925806 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.925850 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.925865 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.925885 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.925899 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:50Z","lastTransitionTime":"2026-02-16T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.933361 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.948997 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.968169 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4811]: I0216 20:56:50.984949 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b7
5b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.010347 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.028590 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.028643 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.028658 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.028681 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.028696 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:51Z","lastTransitionTime":"2026-02-16T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.032989 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a
3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.057324 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd890
9e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.071819 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.087521 4811 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.095696 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.110474 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.123149 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30
d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.157495 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.157564 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.157581 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:51 crc 
kubenswrapper[4811]: I0216 20:56:51.157605 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.157630 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:51Z","lastTransitionTime":"2026-02-16T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.185459 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.202288 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.213028 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.230088 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.260482 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.260507 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.260515 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.260528 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.260538 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:51Z","lastTransitionTime":"2026-02-16T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.362680 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.362715 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.362725 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.362773 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.362786 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:51Z","lastTransitionTime":"2026-02-16T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.468681 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.468736 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.468749 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.468783 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.468794 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:51Z","lastTransitionTime":"2026-02-16T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.582245 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.582283 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.582292 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.582306 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.582320 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:51Z","lastTransitionTime":"2026-02-16T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.661075 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 09:01:08.060313014 +0000 UTC Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.684275 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.684321 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.684332 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.684349 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.684359 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:51Z","lastTransitionTime":"2026-02-16T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.702690 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:51 crc kubenswrapper[4811]: E0216 20:56:51.702906 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.786872 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.786936 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.786952 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.786975 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.786990 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:51Z","lastTransitionTime":"2026-02-16T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.835064 4811 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.889015 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.889062 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.889077 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.889094 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.889108 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:51Z","lastTransitionTime":"2026-02-16T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.921395 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerStarted","Data":"64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31"} Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.921699 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.925889 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" event={"ID":"479f901f-0d27-49cb-8ce9-861848c4e0b7","Type":"ContainerStarted","Data":"ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37"} Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.936457 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.957867 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:
36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.970408 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.990149 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.992641 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.992674 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.992684 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.992698 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.992708 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:51Z","lastTransitionTime":"2026-02-16T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:51 crc kubenswrapper[4811]: I0216 20:56:51.994178 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"
},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2
c78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.006659 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd9
3a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.020849 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.037420 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.053579 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.066992 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.079599 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.091736 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.096268 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.096298 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.096310 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.096327 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.096339 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:52Z","lastTransitionTime":"2026-02-16T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.112825 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.127732 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.146044 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.161250 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-c
onfig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.177355 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.192814 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.202309 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.202372 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.202387 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.202407 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.202506 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:52Z","lastTransitionTime":"2026-02-16T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.213249 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcb
h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.227643 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25e
c387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.245108 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.262384 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.278285 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.296341 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.305487 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.305542 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.305554 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.305576 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.305589 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:52Z","lastTransitionTime":"2026-02-16T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.319929 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.333719 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.359710 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.378633 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:
24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.396856 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.408058 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.408563 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.408586 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.408610 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.408628 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:52Z","lastTransitionTime":"2026-02-16T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.413358 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.429060 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30
d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.510944 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.511018 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.511038 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:52 crc 
kubenswrapper[4811]: I0216 20:56:52.511068 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.511087 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:52Z","lastTransitionTime":"2026-02-16T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.614856 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.614925 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.614938 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.614961 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.614975 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:52Z","lastTransitionTime":"2026-02-16T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.662259 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 19:17:54.925606073 +0000 UTC Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.702952 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.703020 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:52 crc kubenswrapper[4811]: E0216 20:56:52.703241 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:56:52 crc kubenswrapper[4811]: E0216 20:56:52.703423 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.718603 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.718658 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.718676 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.718699 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.718721 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:52Z","lastTransitionTime":"2026-02-16T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.728133 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.747774 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.768086 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcb
h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.793988 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25e
c387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.812527 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.820896 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.820980 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.821010 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.821048 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.821077 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:52Z","lastTransitionTime":"2026-02-16T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.835689 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a
3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.856369 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd890
9e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.877378 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.914914 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.925181 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.925253 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.925271 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.925297 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.925315 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:52Z","lastTransitionTime":"2026-02-16T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.933111 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.933302 4811 generic.go:334] "Generic (PLEG): container finished" podID="479f901f-0d27-49cb-8ce9-861848c4e0b7" containerID="ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37" exitCode=0 Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.933357 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" event={"ID":"479f901f-0d27-49cb-8ce9-861848c4e0b7","Type":"ContainerDied","Data":"ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37"} Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.933491 4811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.933948 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.947940 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.962537 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.968355 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:52 crc kubenswrapper[4811]: I0216 20:56:52.988751 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"contai
nerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.004679 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.021017 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.028612 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.028646 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.028657 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:53 crc 
kubenswrapper[4811]: I0216 20:56:53.028676 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.028688 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:53Z","lastTransitionTime":"2026-02-16T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.033944 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.048026 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.064335 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.083805 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.097769 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.108577 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.119785 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.131930 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.131970 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.131983 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.132005 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.132018 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:53Z","lastTransitionTime":"2026-02-16T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.142291 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.157426 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.174490 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.187783 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.202838 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.217584 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.229275 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.234080 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.234126 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.234140 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.234160 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.234173 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:53Z","lastTransitionTime":"2026-02-16T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.243526 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a
3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.337280 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.337331 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.337340 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.337358 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.337373 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:53Z","lastTransitionTime":"2026-02-16T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.441053 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.441126 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.441145 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.441178 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.441226 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:53Z","lastTransitionTime":"2026-02-16T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.544954 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.545036 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.545057 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.545108 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.545133 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:53Z","lastTransitionTime":"2026-02-16T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.648949 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.649057 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.649095 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.649127 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.649153 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:53Z","lastTransitionTime":"2026-02-16T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.662431 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 05:34:18.977217601 +0000 UTC Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.701946 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:53 crc kubenswrapper[4811]: E0216 20:56:53.702190 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.753091 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.753163 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.753189 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.753263 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.753292 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:53Z","lastTransitionTime":"2026-02-16T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.856413 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.856495 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.856518 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.856546 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.856565 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:53Z","lastTransitionTime":"2026-02-16T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.951408 4811 generic.go:334] "Generic (PLEG): container finished" podID="479f901f-0d27-49cb-8ce9-861848c4e0b7" containerID="531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e" exitCode=0 Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.951565 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" event={"ID":"479f901f-0d27-49cb-8ce9-861848c4e0b7","Type":"ContainerDied","Data":"531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e"} Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.951623 4811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.995806 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.995862 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.995881 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.995906 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:53 crc kubenswrapper[4811]: I0216 20:56:53.995923 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:53Z","lastTransitionTime":"2026-02-16T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.016958 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.035677 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.054846 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.067427 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.082570 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.099998 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.100047 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.100062 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.100087 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.100102 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:54Z","lastTransitionTime":"2026-02-16T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.105098 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.116923 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.135734 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.155734 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\"
,\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.170294 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.183380 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.195380 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.202659 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.202710 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.202720 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.202740 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.202757 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:54Z","lastTransitionTime":"2026-02-16T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.210841 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.234545 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.248941 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a942154723
62338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.306669 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.306736 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.306753 4811 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.306778 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.306795 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:54Z","lastTransitionTime":"2026-02-16T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.409540 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.409597 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.409608 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.409632 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.409646 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:54Z","lastTransitionTime":"2026-02-16T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.512228 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.512270 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.512281 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.512298 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.512309 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:54Z","lastTransitionTime":"2026-02-16T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.615501 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.615545 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.615560 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.615586 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.615603 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:54Z","lastTransitionTime":"2026-02-16T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.662732 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 06:43:04.825196072 +0000 UTC Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.702687 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:54 crc kubenswrapper[4811]: E0216 20:56:54.702882 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.703140 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:54 crc kubenswrapper[4811]: E0216 20:56:54.703267 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.717908 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.717949 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.717965 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.717985 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.718000 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:54Z","lastTransitionTime":"2026-02-16T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.822660 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.823105 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.823347 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.823556 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.823748 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:54Z","lastTransitionTime":"2026-02-16T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.825731 4811 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.926908 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.926959 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.926975 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.926997 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.927012 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:54Z","lastTransitionTime":"2026-02-16T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.960460 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" event={"ID":"479f901f-0d27-49cb-8ce9-861848c4e0b7","Type":"ContainerStarted","Data":"414a43cbfdf4a40d4397c606cd52588bb0fcd2ae991d0c4a15763c7f15c838e8"} Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.960577 4811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 20:56:54 crc kubenswrapper[4811]: I0216 20:56:54.990228 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.012523 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.030304 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.030355 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.030364 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 
20:56:55.030382 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.030395 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:55Z","lastTransitionTime":"2026-02-16T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.036703 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.050833 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.075337 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.093797 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:
24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.107937 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.120780 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.133426 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.133474 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.133488 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.133512 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.133529 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:55Z","lastTransitionTime":"2026-02-16T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.141574 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.162812 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.176478 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.194644 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414a43cbfdf4a40d4397c606cd52588bb0fcd2ae991d0c4a15763c7f15c838e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.213710 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825
771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25e
c387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.235439 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.236623 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.236707 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.236720 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.236742 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.236758 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:55Z","lastTransitionTime":"2026-02-16T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.255090 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a
3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.327584 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:55 crc kubenswrapper[4811]: E0216 20:56:55.328656 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31 is running failed: container process not found" containerID="64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Feb 16 20:56:55 crc kubenswrapper[4811]: E0216 20:56:55.329462 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31 is running failed: container process not found" containerID="64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Feb 16 20:56:55 crc kubenswrapper[4811]: E0216 20:56:55.330324 
4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31 is running failed: container process not found" containerID="64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Feb 16 20:56:55 crc kubenswrapper[4811]: E0216 20:56:55.330385 4811 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="ovnkube-controller" Feb 16 20:56:55 crc kubenswrapper[4811]: E0216 20:56:55.330878 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31 is running failed: container process not found" containerID="64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Feb 16 20:56:55 crc kubenswrapper[4811]: E0216 20:56:55.331432 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31 is running failed: container process not found" containerID="64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Feb 16 20:56:55 crc kubenswrapper[4811]: E0216 20:56:55.331865 4811 log.go:32] "ExecSync cmd from runtime service 
failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31 is running failed: container process not found" containerID="64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Feb 16 20:56:55 crc kubenswrapper[4811]: E0216 20:56:55.331899 4811 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="ovnkube-controller" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.339247 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.339321 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.339347 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.339385 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.339410 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:55Z","lastTransitionTime":"2026-02-16T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.443474 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.443545 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.443563 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.443590 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.443609 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:55Z","lastTransitionTime":"2026-02-16T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.547724 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.547771 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.547785 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.547804 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.547818 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:55Z","lastTransitionTime":"2026-02-16T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.652037 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.652143 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.652171 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.652245 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.652274 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:55Z","lastTransitionTime":"2026-02-16T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.662937 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 01:09:55.306715049 +0000 UTC Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.702435 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:55 crc kubenswrapper[4811]: E0216 20:56:55.702663 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.755968 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.756036 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.756059 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.756096 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.756122 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:55Z","lastTransitionTime":"2026-02-16T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.860471 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.860586 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.860614 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.860657 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.860686 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:55Z","lastTransitionTime":"2026-02-16T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.963491 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.963546 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.963563 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.963591 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.963609 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:55Z","lastTransitionTime":"2026-02-16T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.969385 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x2ggt_e1bbcd0c-f192-4210-831c-82e87a4768a7/ovnkube-controller/0.log" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.973754 4811 generic.go:334] "Generic (PLEG): container finished" podID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerID="64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31" exitCode=1 Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.973846 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerDied","Data":"64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31"} Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.975430 4811 scope.go:117] "RemoveContainer" containerID="64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31" Feb 16 20:56:55 crc kubenswrapper[4811]: I0216 20:56:55.998863 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.025602 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:
36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.050184 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.067625 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.067927 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.068015 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.068156 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.068272 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:56Z","lastTransitionTime":"2026-02-16T20:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.088389 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"message\\\":\\\"twork-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0216 20:56:55.141543 6082 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0216 20:56:55.141670 6082 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 20:56:55.142587 6082 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:56:55.142599 6082 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 20:56:55.142623 6082 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 20:56:55.142638 6082 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 20:56:55.142643 6082 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 20:56:55.142660 6082 factory.go:656] Stopping watch factory\\\\nI0216 20:56:55.142673 6082 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 20:56:55.142683 6082 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 20:56:55.142691 6082 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:56:55.142702 6082 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 20:56:55.142709 6082 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 20:56:55.142718 6082 handler.go:208] Removed *v1.Pod 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.105352 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.124367 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.137705 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.151423 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.164158 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.170332 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.170384 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.170402 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:56 crc 
kubenswrapper[4811]: I0216 20:56:56.170425 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.170443 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:56Z","lastTransitionTime":"2026-02-16T20:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.177828 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.191905 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30
d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.217463 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.230420 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.249417 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414a43cbfdf4a40d4397c606cd52588bb0fcd2ae991d0c4a15763c7f15c838e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3
bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.267555 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.274035 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.274097 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.274114 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.274139 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.274153 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:56Z","lastTransitionTime":"2026-02-16T20:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.377771 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.377816 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.377826 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.377848 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.377859 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:56Z","lastTransitionTime":"2026-02-16T20:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.480949 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.481579 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.481767 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.481926 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.482062 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:56Z","lastTransitionTime":"2026-02-16T20:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.585958 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.586423 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.586634 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.586759 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.586843 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:56Z","lastTransitionTime":"2026-02-16T20:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.658151 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr"] Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.658644 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.660464 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.660523 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.664128 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 03:03:25.015378829 +0000 UTC Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.673344 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is 
after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.689221 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.689539 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.689639 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.689814 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.689921 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:56Z","lastTransitionTime":"2026-02-16T20:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.692313 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.703088 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.703090 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:56 crc kubenswrapper[4811]: E0216 20:56:56.703260 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:56:56 crc kubenswrapper[4811]: E0216 20:56:56.703338 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.710787 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414a43cbfdf4a40d4397c606cd52588bb0fcd2ae991d0c4a15763c7f15c838e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.723391 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25e
c387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.738727 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.753432 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.765543 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91ed3265-a583-4b6c-bb05-52f5b758b44d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l89mr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.771366 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91ed3265-a583-4b6c-bb05-52f5b758b44d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-l89mr\" (UID: \"91ed3265-a583-4b6c-bb05-52f5b758b44d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.771428 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2tpg\" (UniqueName: \"kubernetes.io/projected/91ed3265-a583-4b6c-bb05-52f5b758b44d-kube-api-access-w2tpg\") pod \"ovnkube-control-plane-749d76644c-l89mr\" (UID: \"91ed3265-a583-4b6c-bb05-52f5b758b44d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.771615 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91ed3265-a583-4b6c-bb05-52f5b758b44d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-l89mr\" (UID: \"91ed3265-a583-4b6c-bb05-52f5b758b44d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.771679 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91ed3265-a583-4b6c-bb05-52f5b758b44d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-l89mr\" (UID: 
\"91ed3265-a583-4b6c-bb05-52f5b758b44d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.779934 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.792454 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.792522 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.792544 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.792571 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.792597 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:56Z","lastTransitionTime":"2026-02-16T20:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.808752 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.834358 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20
:56:55Z\\\",\\\"message\\\":\\\"twork-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0216 20:56:55.141543 6082 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0216 20:56:55.141670 6082 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 20:56:55.142587 6082 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:56:55.142599 6082 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 20:56:55.142623 6082 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 20:56:55.142638 6082 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 20:56:55.142643 6082 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 20:56:55.142660 6082 factory.go:656] Stopping watch factory\\\\nI0216 20:56:55.142673 6082 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 20:56:55.142683 6082 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 20:56:55.142691 6082 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:56:55.142702 6082 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 20:56:55.142709 6082 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 20:56:55.142718 6082 handler.go:208] Removed *v1.Pod 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.848401 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.863941 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.872717 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91ed3265-a583-4b6c-bb05-52f5b758b44d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-l89mr\" (UID: \"91ed3265-a583-4b6c-bb05-52f5b758b44d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.872795 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2tpg\" (UniqueName: \"kubernetes.io/projected/91ed3265-a583-4b6c-bb05-52f5b758b44d-kube-api-access-w2tpg\") pod \"ovnkube-control-plane-749d76644c-l89mr\" (UID: 
\"91ed3265-a583-4b6c-bb05-52f5b758b44d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.872857 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91ed3265-a583-4b6c-bb05-52f5b758b44d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-l89mr\" (UID: \"91ed3265-a583-4b6c-bb05-52f5b758b44d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.872886 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91ed3265-a583-4b6c-bb05-52f5b758b44d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-l89mr\" (UID: \"91ed3265-a583-4b6c-bb05-52f5b758b44d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.873724 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91ed3265-a583-4b6c-bb05-52f5b758b44d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-l89mr\" (UID: \"91ed3265-a583-4b6c-bb05-52f5b758b44d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.873901 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91ed3265-a583-4b6c-bb05-52f5b758b44d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-l89mr\" (UID: \"91ed3265-a583-4b6c-bb05-52f5b758b44d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.880706 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/91ed3265-a583-4b6c-bb05-52f5b758b44d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-l89mr\" (UID: \"91ed3265-a583-4b6c-bb05-52f5b758b44d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.895585 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.895640 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.895651 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.895673 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.895686 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:56Z","lastTransitionTime":"2026-02-16T20:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.897825 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2tpg\" (UniqueName: \"kubernetes.io/projected/91ed3265-a583-4b6c-bb05-52f5b758b44d-kube-api-access-w2tpg\") pod \"ovnkube-control-plane-749d76644c-l89mr\" (UID: \"91ed3265-a583-4b6c-bb05-52f5b758b44d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.899181 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3
d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.931225 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847
b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.948767 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.969449 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.970851 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.996131 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x2ggt_e1bbcd0c-f192-4210-831c-82e87a4768a7/ovnkube-controller/0.log" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.997369 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.997413 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.997422 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.997441 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:56 crc kubenswrapper[4811]: I0216 20:56:56.997452 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:56Z","lastTransitionTime":"2026-02-16T20:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.001289 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerStarted","Data":"763f6f468893cda9dd0d5f2cee2e58567b16a8365e139f78638a16637a0c84f8"} Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.002325 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.022273 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-1
6T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:57Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.036280 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:57Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.051361 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:57Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.064573 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:57Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.078638 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:57Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.094892 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:57Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.099841 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.099902 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.099916 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 
20:56:57.099937 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.099951 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:57Z","lastTransitionTime":"2026-02-16T20:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.108119 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:57Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.123801 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414a43cbfdf4a40d4397c606cd52588bb0fcd2ae991d0c4a15763c7f15c838e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3
bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:57Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.137397 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:57Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.151576 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:57Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.163035 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91ed3265-a583-4b6c-bb05-52f5b758b44d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l89mr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:57Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.177891 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:57Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.190883 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:57Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.202400 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.202440 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.202452 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 
20:56:57.202471 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.202485 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:57Z","lastTransitionTime":"2026-02-16T20:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.209588 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763f6f468893cda9dd0d5f2cee2e58567b16a8365e139f78638a16637a0c84f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"message\\\":\\\"twork-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0216 20:56:55.141543 6082 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0216 20:56:55.141670 6082 handler.go:190] Sending *v1.NetworkPolicy event 
handler 4 for removal\\\\nI0216 20:56:55.142587 6082 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:56:55.142599 6082 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 20:56:55.142623 6082 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 20:56:55.142638 6082 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 20:56:55.142643 6082 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 20:56:55.142660 6082 factory.go:656] Stopping watch factory\\\\nI0216 20:56:55.142673 6082 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 20:56:55.142683 6082 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 20:56:55.142691 6082 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:56:55.142702 6082 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 20:56:55.142709 6082 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 20:56:55.142718 6082 handler.go:208] Removed *v1.Pod 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:57Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.223887 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:57Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.237640 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:57Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.305814 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.305878 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.305895 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.305923 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.305943 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:57Z","lastTransitionTime":"2026-02-16T20:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.408682 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.408720 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.408731 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.408746 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.408759 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:57Z","lastTransitionTime":"2026-02-16T20:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.487359 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.487463 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.487511 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.487534 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:57 crc kubenswrapper[4811]: E0216 20:56:57.487617 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:56:57 crc 
kubenswrapper[4811]: E0216 20:56:57.487601 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.487563674 +0000 UTC m=+51.416859612 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:56:57 crc kubenswrapper[4811]: E0216 20:56:57.487668 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.487653266 +0000 UTC m=+51.416949204 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:56:57 crc kubenswrapper[4811]: E0216 20:56:57.487705 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:57 crc kubenswrapper[4811]: E0216 20:56:57.487737 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:57 crc kubenswrapper[4811]: E0216 20:56:57.487759 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:57 crc kubenswrapper[4811]: E0216 20:56:57.487769 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:56:57 crc kubenswrapper[4811]: E0216 20:56:57.487817 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.48779916 +0000 UTC m=+51.417095128 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:57 crc kubenswrapper[4811]: E0216 20:56:57.487847 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.48783366 +0000 UTC m=+51.417129638 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.510660 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.510707 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.510716 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.510732 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.510742 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:57Z","lastTransitionTime":"2026-02-16T20:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.588979 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:57 crc kubenswrapper[4811]: E0216 20:56:57.589272 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:57 crc kubenswrapper[4811]: E0216 20:56:57.589314 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:57 crc kubenswrapper[4811]: E0216 20:56:57.589328 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:57 crc kubenswrapper[4811]: E0216 20:56:57.589406 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-16 20:57:13.589380623 +0000 UTC m=+51.518676751 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.613892 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.614003 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.614120 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.614169 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.614260 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:57Z","lastTransitionTime":"2026-02-16T20:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.665312 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 07:30:49.644550006 +0000 UTC Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.702016 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:57 crc kubenswrapper[4811]: E0216 20:56:57.702264 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.721052 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.721105 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.721121 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.721430 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.721519 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:57Z","lastTransitionTime":"2026-02-16T20:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.825710 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.825767 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.825778 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.825799 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.825813 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:57Z","lastTransitionTime":"2026-02-16T20:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.928709 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.928770 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.928783 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.928808 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:57 crc kubenswrapper[4811]: I0216 20:56:57.928822 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:57Z","lastTransitionTime":"2026-02-16T20:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.007883 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x2ggt_e1bbcd0c-f192-4210-831c-82e87a4768a7/ovnkube-controller/1.log" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.008832 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x2ggt_e1bbcd0c-f192-4210-831c-82e87a4768a7/ovnkube-controller/0.log" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.012328 4811 generic.go:334] "Generic (PLEG): container finished" podID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerID="763f6f468893cda9dd0d5f2cee2e58567b16a8365e139f78638a16637a0c84f8" exitCode=1 Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.012468 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerDied","Data":"763f6f468893cda9dd0d5f2cee2e58567b16a8365e139f78638a16637a0c84f8"} Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.012542 4811 scope.go:117] "RemoveContainer" containerID="64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.013942 4811 scope.go:117] "RemoveContainer" containerID="763f6f468893cda9dd0d5f2cee2e58567b16a8365e139f78638a16637a0c84f8" Feb 16 20:56:58 crc kubenswrapper[4811]: E0216 20:56:58.014328 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-x2ggt_openshift-ovn-kubernetes(e1bbcd0c-f192-4210-831c-82e87a4768a7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.016974 4811 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" event={"ID":"91ed3265-a583-4b6c-bb05-52f5b758b44d","Type":"ContainerStarted","Data":"12cf9d69d4d523505bd8f6a9183f62a05788b057ac1667956aa5aba063ee5012"} Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.017016 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" event={"ID":"91ed3265-a583-4b6c-bb05-52f5b758b44d","Type":"ContainerStarted","Data":"5b00840bdcc3183dce8bb004f0e2eeb132030cf0895d91bdefa430d0e9593cfd"} Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.017037 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" event={"ID":"91ed3265-a583-4b6c-bb05-52f5b758b44d","Type":"ContainerStarted","Data":"006560af74795783419ec8830b5557a7da88400bf5db1af80732a931c6f8c430"} Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.030170 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.031232 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.031308 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.031335 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.031370 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.031396 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:58Z","lastTransitionTime":"2026-02-16T20:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.052939 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.066271 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.082281 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.106106 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26
702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"f
inishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.121013 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.133644 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.133687 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.133697 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.133718 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.133731 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:58Z","lastTransitionTime":"2026-02-16T20:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.144662 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-7nk7k"] Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.145214 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:56:58 crc kubenswrapper[4811]: E0216 20:56:58.145280 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.145894 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414a43cbfdf4a40d4397c606cd52588bb0fcd2ae991d0c4a15763c7f15c838e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\
\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wherea
bouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.162054 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.178576 4811 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.189798 4811 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91ed3265-a583-4b6c-bb05-52f5b758b44d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l89mr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.197052 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs\") pod \"network-metrics-daemon-7nk7k\" (UID: \"1b4c0a11-23d9-412e-a5d8-120d622bef57\") " pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.197299 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hffc\" (UniqueName: \"kubernetes.io/projected/1b4c0a11-23d9-412e-a5d8-120d622bef57-kube-api-access-8hffc\") pod \"network-metrics-daemon-7nk7k\" (UID: \"1b4c0a11-23d9-412e-a5d8-120d622bef57\") " pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.203779 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\"
,\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.218969 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.238443 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.238506 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.238522 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.238546 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.238559 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:58Z","lastTransitionTime":"2026-02-16T20:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.240275 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763f6f468893cda9dd0d5f2cee2e58567b16a8365e139f78638a16637a0c84f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"message\\\":\\\"twork-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0216 20:56:55.141543 6082 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0216 20:56:55.141670 6082 handler.go:190] Sending 
*v1.NetworkPolicy event handler 4 for removal\\\\nI0216 20:56:55.142587 6082 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:56:55.142599 6082 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 20:56:55.142623 6082 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 20:56:55.142638 6082 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 20:56:55.142643 6082 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 20:56:55.142660 6082 factory.go:656] Stopping watch factory\\\\nI0216 20:56:55.142673 6082 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 20:56:55.142683 6082 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 20:56:55.142691 6082 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:56:55.142702 6082 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 20:56:55.142709 6082 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 20:56:55.142718 6082 handler.go:208] Removed *v1.Pod ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763f6f468893cda9dd0d5f2cee2e58567b16a8365e139f78638a16637a0c84f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"message\\\":\\\"controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0216 20:56:57.263516 6267 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fh2mx\\\\nI0216 20:56:57.263526 6267 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fh2mx\\\\nI0216 20:56:57.263542 6267 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-fh2mx in node 
crc\\\\nI0216 20:56:57.263555 6267 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-mgctp\\\\nI0216 20:56:57.263560 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-fh2mx after 0 failed attempt(s)\\\\nF0216 20:56:57.263564 6267 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\
\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8
hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.251435 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.266776 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.285860 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.298092 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs\") pod \"network-metrics-daemon-7nk7k\" (UID: \"1b4c0a11-23d9-412e-a5d8-120d622bef57\") " pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.298170 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hffc\" 
(UniqueName: \"kubernetes.io/projected/1b4c0a11-23d9-412e-a5d8-120d622bef57-kube-api-access-8hffc\") pod \"network-metrics-daemon-7nk7k\" (UID: \"1b4c0a11-23d9-412e-a5d8-120d622bef57\") " pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:56:58 crc kubenswrapper[4811]: E0216 20:56:58.298346 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:56:58 crc kubenswrapper[4811]: E0216 20:56:58.298450 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs podName:1b4c0a11-23d9-412e-a5d8-120d622bef57 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:58.798427649 +0000 UTC m=+36.727723597 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs") pod "network-metrics-daemon-7nk7k" (UID: "1b4c0a11-23d9-412e-a5d8-120d622bef57") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.302817 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\"
,\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.317493 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.317529 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hffc\" (UniqueName: \"kubernetes.io/projected/1b4c0a11-23d9-412e-a5d8-120d622bef57-kube-api-access-8hffc\") pod \"network-metrics-daemon-7nk7k\" (UID: \"1b4c0a11-23d9-412e-a5d8-120d622bef57\") " pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:56:58 crc 
kubenswrapper[4811]: I0216 20:56:58.329374 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc 
kubenswrapper[4811]: I0216 20:56:58.342364 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.342423 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.342442 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.342466 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.342485 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:58Z","lastTransitionTime":"2026-02-16T20:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.343917 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91ed3265-a583-4b6c-bb05-52f5b758b44d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b00840bdcc3183dce8bb004f0e2eeb132030cf0895d91bdefa430d0e9593cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12cf9d69d4d523505bd8f6a9183f62a05788b057ac1667956aa5aba063ee5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l89mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.360998 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.374581 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.379412 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.379542 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.379571 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.379608 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.379633 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:58Z","lastTransitionTime":"2026-02-16T20:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:58 crc kubenswrapper[4811]: E0216 20:56:58.397821 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.398641 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763f6f468893cda9dd0d5f2cee2e58567b16a8365e139f78638a16637a0c84f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64406200372bb47d840882618de81aacde2fc9a11b6e64b7d377b884776aca31\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"message\\\":\\\"twork-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0216 20:56:55.141543 6082 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0216 20:56:55.141670 6082 handler.go:190] Sending *v1.NetworkPolicy event 
handler 4 for removal\\\\nI0216 20:56:55.142587 6082 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:56:55.142599 6082 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 20:56:55.142623 6082 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 20:56:55.142638 6082 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 20:56:55.142643 6082 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 20:56:55.142660 6082 factory.go:656] Stopping watch factory\\\\nI0216 20:56:55.142673 6082 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 20:56:55.142683 6082 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 20:56:55.142691 6082 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:56:55.142702 6082 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 20:56:55.142709 6082 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 20:56:55.142718 6082 handler.go:208] Removed *v1.Pod ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763f6f468893cda9dd0d5f2cee2e58567b16a8365e139f78638a16637a0c84f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"message\\\":\\\"controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0216 20:56:57.263516 6267 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fh2mx\\\\nI0216 20:56:57.263526 6267 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fh2mx\\\\nI0216 20:56:57.263542 6267 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-fh2mx in node crc\\\\nI0216 
20:56:57.263555 6267 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-mgctp\\\\nI0216 20:56:57.263560 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-fh2mx after 0 failed attempt(s)\\\\nF0216 20:56:57.263564 6267 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/li
b/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.404261 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:58 
crc kubenswrapper[4811]: I0216 20:56:58.404361 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.404386 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.404415 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.404445 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:58Z","lastTransitionTime":"2026-02-16T20:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.422055 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: E0216 20:56:58.427405 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.432852 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.432944 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.432962 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.432983 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.432996 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:58Z","lastTransitionTime":"2026-02-16T20:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.440247 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7nk7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b4c0a11-23d9-412e-a5d8-120d622bef57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7nk7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc 
kubenswrapper[4811]: E0216 20:56:58.448272 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.452624 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.452677 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.452691 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.452712 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.452726 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:58Z","lastTransitionTime":"2026-02-16T20:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.465184 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: E0216 20:56:58.469088 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.473468 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.473538 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.473555 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.473574 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.473601 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:58Z","lastTransitionTime":"2026-02-16T20:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:58 crc kubenswrapper[4811]: E0216 20:56:58.490188 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: E0216 20:56:58.490369 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.492648 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.492696 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.492710 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.492732 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.492749 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:58Z","lastTransitionTime":"2026-02-16T20:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.494611 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"i
mageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.510041 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.525363 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.546262 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.565845 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.584464 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.595951 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.596161 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.596285 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.596410 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.596504 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:58Z","lastTransitionTime":"2026-02-16T20:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.605109 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414a43cbfdf4a40d4397c606cd52588bb0fcd2ae991d0c4a15763c7f15c838e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.665887 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 00:45:41.039616791 +0000 UTC Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.699153 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.699252 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.699273 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.699302 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.699323 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:58Z","lastTransitionTime":"2026-02-16T20:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.702462 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:58 crc kubenswrapper[4811]: E0216 20:56:58.702649 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.702729 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:58 crc kubenswrapper[4811]: E0216 20:56:58.702927 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.803490 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs\") pod \"network-metrics-daemon-7nk7k\" (UID: \"1b4c0a11-23d9-412e-a5d8-120d622bef57\") " pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.804406 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.804672 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.804802 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:58 crc kubenswrapper[4811]: E0216 20:56:58.804539 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.805304 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:58 crc kubenswrapper[4811]: E0216 20:56:58.805427 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs podName:1b4c0a11-23d9-412e-a5d8-120d622bef57 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:59.805374693 +0000 UTC m=+37.734670631 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs") pod "network-metrics-daemon-7nk7k" (UID: "1b4c0a11-23d9-412e-a5d8-120d622bef57") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.805420 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:58Z","lastTransitionTime":"2026-02-16T20:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.909188 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.909299 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.909319 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.909355 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:58 crc kubenswrapper[4811]: I0216 20:56:58.909379 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:58Z","lastTransitionTime":"2026-02-16T20:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.012532 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.012606 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.012625 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.012658 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.012679 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:59Z","lastTransitionTime":"2026-02-16T20:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.024894 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x2ggt_e1bbcd0c-f192-4210-831c-82e87a4768a7/ovnkube-controller/1.log" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.032370 4811 scope.go:117] "RemoveContainer" containerID="763f6f468893cda9dd0d5f2cee2e58567b16a8365e139f78638a16637a0c84f8" Feb 16 20:56:59 crc kubenswrapper[4811]: E0216 20:56:59.032634 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-x2ggt_openshift-ovn-kubernetes(e1bbcd0c-f192-4210-831c-82e87a4768a7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.057534 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.082646 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.116686 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.116746 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.116760 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.116778 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.116790 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:59Z","lastTransitionTime":"2026-02-16T20:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.118936 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763f6f468893cda9dd0d5f2cee2e58567b16a8365e139f78638a16637a0c84f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763f6f468893cda9dd0d5f2cee2e58567b16a8365e139f78638a16637a0c84f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"message\\\":\\\"controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0216 20:56:57.263516 6267 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fh2mx\\\\nI0216 20:56:57.263526 6267 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-machine-config-operator/machine-config-daemon-fh2mx\\\\nI0216 20:56:57.263542 6267 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-fh2mx in node crc\\\\nI0216 20:56:57.263555 6267 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-mgctp\\\\nI0216 20:56:57.263560 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-fh2mx after 0 failed attempt(s)\\\\nF0216 20:56:57.263564 6267 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x2ggt_openshift-ovn-kubernetes(e1bbcd0c-f192-4210-831c-82e87a4768a7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f
2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.135645 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.154700 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7nk7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b4c0a11-23d9-412e-a5d8-120d622bef57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7nk7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc 
kubenswrapper[4811]: I0216 20:56:59.174669 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.215988 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.220989 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.221055 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.221076 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.221104 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.221126 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:59Z","lastTransitionTime":"2026-02-16T20:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.243470 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233b
db7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.267071 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.283913 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.306863 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.325635 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.325701 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.325724 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.325757 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.325781 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:59Z","lastTransitionTime":"2026-02-16T20:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.333979 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.363024 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414a43cbfdf4a40d4397c606cd52588bb0fcd2ae991d0c4a15763c7f15c838e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.392492 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825
771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25e
c387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.417136 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.431035 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.431117 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.431137 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.431169 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.431191 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:59Z","lastTransitionTime":"2026-02-16T20:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.444810 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a
3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.466572 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91ed3265-a583-4b6c-bb05-52f5b758b44d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b00840bdcc3183d
ce8bb004f0e2eeb132030cf0895d91bdefa430d0e9593cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12cf9d69d4d523505bd8f6a9183f62a05788b057ac1667956aa5aba063ee5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l89mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.535805 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.535879 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.535898 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.535926 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.535948 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:59Z","lastTransitionTime":"2026-02-16T20:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.640269 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.640337 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.640360 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.640387 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.640406 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:59Z","lastTransitionTime":"2026-02-16T20:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.666066 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 18:03:35.621629544 +0000 UTC Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.702608 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.702777 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:59 crc kubenswrapper[4811]: E0216 20:56:59.702850 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:56:59 crc kubenswrapper[4811]: E0216 20:56:59.703090 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.743631 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.743709 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.743730 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.743776 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.743802 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:59Z","lastTransitionTime":"2026-02-16T20:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.818551 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs\") pod \"network-metrics-daemon-7nk7k\" (UID: \"1b4c0a11-23d9-412e-a5d8-120d622bef57\") " pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:56:59 crc kubenswrapper[4811]: E0216 20:56:59.818920 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:56:59 crc kubenswrapper[4811]: E0216 20:56:59.819100 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs podName:1b4c0a11-23d9-412e-a5d8-120d622bef57 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:01.819034626 +0000 UTC m=+39.748330594 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs") pod "network-metrics-daemon-7nk7k" (UID: "1b4c0a11-23d9-412e-a5d8-120d622bef57") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.847576 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.847639 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.847654 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.847678 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.847693 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:59Z","lastTransitionTime":"2026-02-16T20:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.950893 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.950975 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.951000 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.951034 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:59 crc kubenswrapper[4811]: I0216 20:56:59.951055 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:59Z","lastTransitionTime":"2026-02-16T20:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.054708 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.054775 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.054795 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.054826 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.054850 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:00Z","lastTransitionTime":"2026-02-16T20:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.159085 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.159147 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.159167 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.159217 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.159240 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:00Z","lastTransitionTime":"2026-02-16T20:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.263408 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.263851 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.264003 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.264146 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.264359 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:00Z","lastTransitionTime":"2026-02-16T20:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.375633 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.375716 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.375736 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.375767 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.375787 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:00Z","lastTransitionTime":"2026-02-16T20:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.479628 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.479699 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.479720 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.479750 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.479769 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:00Z","lastTransitionTime":"2026-02-16T20:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.582878 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.582933 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.582944 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.582973 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.582989 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:00Z","lastTransitionTime":"2026-02-16T20:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.666656 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 12:18:43.642984624 +0000 UTC Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.686317 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.686365 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.686383 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.686413 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.686432 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:00Z","lastTransitionTime":"2026-02-16T20:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.702736 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.702931 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:00 crc kubenswrapper[4811]: E0216 20:57:00.703167 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:00 crc kubenswrapper[4811]: E0216 20:57:00.703395 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.789862 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.789915 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.789929 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.789949 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.789963 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:00Z","lastTransitionTime":"2026-02-16T20:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.893304 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.893387 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.893408 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.893442 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.893464 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:00Z","lastTransitionTime":"2026-02-16T20:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.996954 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.997024 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.997052 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.997081 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:00 crc kubenswrapper[4811]: I0216 20:57:00.997097 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:00Z","lastTransitionTime":"2026-02-16T20:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.099735 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.099785 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.099796 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.099812 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.099824 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:01Z","lastTransitionTime":"2026-02-16T20:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.203839 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.203913 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.203930 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.203960 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.203979 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:01Z","lastTransitionTime":"2026-02-16T20:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.307430 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.307499 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.307521 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.307550 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.307566 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:01Z","lastTransitionTime":"2026-02-16T20:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.410765 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.410850 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.410868 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.410897 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.410917 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:01Z","lastTransitionTime":"2026-02-16T20:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.515683 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.515734 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.515747 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.515769 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.515788 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:01Z","lastTransitionTime":"2026-02-16T20:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.619261 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.619304 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.619315 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.619356 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.619369 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:01Z","lastTransitionTime":"2026-02-16T20:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.667796 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 08:13:46.144955187 +0000 UTC Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.702741 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.702768 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:01 crc kubenswrapper[4811]: E0216 20:57:01.703018 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:01 crc kubenswrapper[4811]: E0216 20:57:01.703454 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.723343 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.723408 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.723434 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.723469 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.723492 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:01Z","lastTransitionTime":"2026-02-16T20:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.827925 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.828014 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.828044 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.828080 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.828107 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:01Z","lastTransitionTime":"2026-02-16T20:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.841524 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs\") pod \"network-metrics-daemon-7nk7k\" (UID: \"1b4c0a11-23d9-412e-a5d8-120d622bef57\") " pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:01 crc kubenswrapper[4811]: E0216 20:57:01.841895 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:57:01 crc kubenswrapper[4811]: E0216 20:57:01.842076 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs podName:1b4c0a11-23d9-412e-a5d8-120d622bef57 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:05.842039514 +0000 UTC m=+43.771335492 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs") pod "network-metrics-daemon-7nk7k" (UID: "1b4c0a11-23d9-412e-a5d8-120d622bef57") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.931737 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.931812 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.931834 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.931863 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:01 crc kubenswrapper[4811]: I0216 20:57:01.931885 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:01Z","lastTransitionTime":"2026-02-16T20:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.035430 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.035836 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.035994 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.036145 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.036335 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:02Z","lastTransitionTime":"2026-02-16T20:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.140747 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.140815 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.140840 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.140876 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.140905 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:02Z","lastTransitionTime":"2026-02-16T20:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.244362 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.244430 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.244449 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.244476 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.244504 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:02Z","lastTransitionTime":"2026-02-16T20:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.347939 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.347994 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.348008 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.348032 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.348049 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:02Z","lastTransitionTime":"2026-02-16T20:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.451470 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.451535 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.451547 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.451564 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.451576 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:02Z","lastTransitionTime":"2026-02-16T20:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.554384 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.554738 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.554833 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.554939 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.555034 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:02Z","lastTransitionTime":"2026-02-16T20:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.658010 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.658080 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.658102 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.658134 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.658156 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:02Z","lastTransitionTime":"2026-02-16T20:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.668870 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 13:33:41.199268987 +0000 UTC Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.702933 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:02 crc kubenswrapper[4811]: E0216 20:57:02.703129 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.703528 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:02 crc kubenswrapper[4811]: E0216 20:57:02.703633 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.726187 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.747157 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:57:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.762102 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.762165 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.762184 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.762245 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.762268 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:02Z","lastTransitionTime":"2026-02-16T20:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.775416 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414a43cbfdf4a40d4397c606cd52588bb0fcd2ae991d0c4a15763c7f15c838e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.803614 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25e
c387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.829650 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.848966 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.866780 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:02 crc 
kubenswrapper[4811]: I0216 20:57:02.866853 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.866873 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.866898 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.866943 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:02Z","lastTransitionTime":"2026-02-16T20:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.867499 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91ed3265-a583-4b6c-bb05-52f5b758b44d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b00840bdcc3183dce8bb004f0e2eeb132030cf0895d91bdefa430d0e9593cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12cf9d69d4d523505bd8f6a9183f62a05788b
057ac1667956aa5aba063ee5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l89mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.887865 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.905757 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.935557 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763f6f468893cda9dd0d5f2cee2e58567b16a8365e139f78638a16637a0c84f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763f6f468893cda9dd0d5f2cee2e58567b16a8365e139f78638a16637a0c84f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"message\\\":\\\"controller.go:776] Recording success event on pod 
openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0216 20:56:57.263516 6267 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fh2mx\\\\nI0216 20:56:57.263526 6267 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fh2mx\\\\nI0216 20:56:57.263542 6267 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-fh2mx in node crc\\\\nI0216 20:56:57.263555 6267 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-mgctp\\\\nI0216 20:56:57.263560 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-fh2mx after 0 failed attempt(s)\\\\nF0216 20:56:57.263564 6267 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x2ggt_openshift-ovn-kubernetes(e1bbcd0c-f192-4210-831c-82e87a4768a7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f
2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.951653 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.969514 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7nk7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b4c0a11-23d9-412e-a5d8-120d622bef57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7nk7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:02 crc 
kubenswrapper[4811]: I0216 20:57:02.978794 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.978863 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.978882 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.978908 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:02 crc kubenswrapper[4811]: I0216 20:57:02.978926 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:02Z","lastTransitionTime":"2026-02-16T20:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.014838 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.040262 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.060464 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.070947 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.082276 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.082695 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.082749 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.082760 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.082781 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.082795 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:03Z","lastTransitionTime":"2026-02-16T20:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.186150 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.186263 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.186286 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.186350 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.186370 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:03Z","lastTransitionTime":"2026-02-16T20:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.289916 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.290283 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.290436 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.290542 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.290629 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:03Z","lastTransitionTime":"2026-02-16T20:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.394589 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.394665 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.394684 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.394713 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.394736 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:03Z","lastTransitionTime":"2026-02-16T20:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.497572 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.497642 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.497663 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.497696 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.497720 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:03Z","lastTransitionTime":"2026-02-16T20:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.600633 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.600699 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.600722 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.600753 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.600775 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:03Z","lastTransitionTime":"2026-02-16T20:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.669384 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 18:00:44.47629687 +0000 UTC Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.702185 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.702192 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:03 crc kubenswrapper[4811]: E0216 20:57:03.702792 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:03 crc kubenswrapper[4811]: E0216 20:57:03.702832 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.704615 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.704649 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.704661 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.704679 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.704690 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:03Z","lastTransitionTime":"2026-02-16T20:57:03Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.807790 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.807858 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.807873 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.807895 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.807911 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:03Z","lastTransitionTime":"2026-02-16T20:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.911065 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.911134 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.911153 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.911184 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:03 crc kubenswrapper[4811]: I0216 20:57:03.911247 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:03Z","lastTransitionTime":"2026-02-16T20:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.014723 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.014926 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.014947 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.014985 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.015004 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:04Z","lastTransitionTime":"2026-02-16T20:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.123871 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.123946 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.123969 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.124001 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.124022 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:04Z","lastTransitionTime":"2026-02-16T20:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.228144 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.228268 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.228291 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.228327 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.228347 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:04Z","lastTransitionTime":"2026-02-16T20:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.331877 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.331986 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.332003 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.332028 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.332045 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:04Z","lastTransitionTime":"2026-02-16T20:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.435753 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.435823 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.435846 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.435879 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.435906 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:04Z","lastTransitionTime":"2026-02-16T20:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.539431 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.539524 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.539544 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.539578 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.539597 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:04Z","lastTransitionTime":"2026-02-16T20:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.643035 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.643096 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.643118 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.643149 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.643167 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:04Z","lastTransitionTime":"2026-02-16T20:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.670074 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 01:40:20.351801237 +0000 UTC Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.702821 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.702825 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:04 crc kubenswrapper[4811]: E0216 20:57:04.703029 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:04 crc kubenswrapper[4811]: E0216 20:57:04.703490 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.746739 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.746785 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.746794 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.746834 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.746851 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:04Z","lastTransitionTime":"2026-02-16T20:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.850191 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.850282 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.850301 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.850332 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.850353 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:04Z","lastTransitionTime":"2026-02-16T20:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.954311 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.954386 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.954406 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.954434 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:04 crc kubenswrapper[4811]: I0216 20:57:04.954455 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:04Z","lastTransitionTime":"2026-02-16T20:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.057315 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.057389 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.057408 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.057435 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.057486 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:05Z","lastTransitionTime":"2026-02-16T20:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.161025 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.161090 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.161108 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.161137 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.161158 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:05Z","lastTransitionTime":"2026-02-16T20:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.265085 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.265161 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.265179 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.265254 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.265275 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:05Z","lastTransitionTime":"2026-02-16T20:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.369281 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.369345 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.369359 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.369383 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.369399 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:05Z","lastTransitionTime":"2026-02-16T20:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.473133 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.473257 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.473278 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.473306 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.473323 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:05Z","lastTransitionTime":"2026-02-16T20:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.577534 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.577627 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.577654 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.577692 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.577720 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:05Z","lastTransitionTime":"2026-02-16T20:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.670763 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 23:20:11.235640682 +0000 UTC Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.682387 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.682450 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.682467 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.682496 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.682514 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:05Z","lastTransitionTime":"2026-02-16T20:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.702048 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.702137 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:05 crc kubenswrapper[4811]: E0216 20:57:05.702287 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:05 crc kubenswrapper[4811]: E0216 20:57:05.702447 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.786151 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.786251 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.786270 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.786301 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.786320 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:05Z","lastTransitionTime":"2026-02-16T20:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.889910 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.889983 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.890003 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.890033 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.890055 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:05Z","lastTransitionTime":"2026-02-16T20:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.894144 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs\") pod \"network-metrics-daemon-7nk7k\" (UID: \"1b4c0a11-23d9-412e-a5d8-120d622bef57\") " pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:05 crc kubenswrapper[4811]: E0216 20:57:05.894491 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:57:05 crc kubenswrapper[4811]: E0216 20:57:05.894639 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs podName:1b4c0a11-23d9-412e-a5d8-120d622bef57 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.894598175 +0000 UTC m=+51.823894183 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs") pod "network-metrics-daemon-7nk7k" (UID: "1b4c0a11-23d9-412e-a5d8-120d622bef57") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.993647 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.993734 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.993755 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.993787 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:05 crc kubenswrapper[4811]: I0216 20:57:05.993808 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:05Z","lastTransitionTime":"2026-02-16T20:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.096859 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.096927 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.096941 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.096965 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.096979 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:06Z","lastTransitionTime":"2026-02-16T20:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.200392 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.200472 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.200490 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.200525 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.200544 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:06Z","lastTransitionTime":"2026-02-16T20:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.304529 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.304616 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.304639 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.304672 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.304696 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:06Z","lastTransitionTime":"2026-02-16T20:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.409762 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.409827 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.409848 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.409895 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.409937 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:06Z","lastTransitionTime":"2026-02-16T20:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.517658 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.517789 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.517821 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.517861 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.517899 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:06Z","lastTransitionTime":"2026-02-16T20:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.621328 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.621400 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.621420 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.621446 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.621464 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:06Z","lastTransitionTime":"2026-02-16T20:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.671357 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 01:15:55.842664424 +0000 UTC Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.702987 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.703143 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:06 crc kubenswrapper[4811]: E0216 20:57:06.703246 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:06 crc kubenswrapper[4811]: E0216 20:57:06.703433 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.724817 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.724894 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.724922 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.724956 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.724981 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:06Z","lastTransitionTime":"2026-02-16T20:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.828392 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.828489 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.828514 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.828551 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.828580 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:06Z","lastTransitionTime":"2026-02-16T20:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.931456 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.931530 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.931549 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.931581 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:06 crc kubenswrapper[4811]: I0216 20:57:06.931601 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:06Z","lastTransitionTime":"2026-02-16T20:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.035096 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.035176 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.035207 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.035308 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.035336 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:07Z","lastTransitionTime":"2026-02-16T20:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.139339 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.139406 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.139427 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.139457 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.139518 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:07Z","lastTransitionTime":"2026-02-16T20:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.242914 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.243012 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.243033 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.243070 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.243091 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:07Z","lastTransitionTime":"2026-02-16T20:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.346708 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.346760 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.346773 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.346793 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.346806 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:07Z","lastTransitionTime":"2026-02-16T20:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.449985 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.450053 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.450071 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.450098 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.450117 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:07Z","lastTransitionTime":"2026-02-16T20:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.553379 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.553460 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.553484 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.553536 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.553556 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:07Z","lastTransitionTime":"2026-02-16T20:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.657672 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.657795 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.657817 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.657850 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.657874 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:07Z","lastTransitionTime":"2026-02-16T20:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.672174 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 22:16:17.014007477 +0000 UTC Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.702751 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.702751 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:07 crc kubenswrapper[4811]: E0216 20:57:07.703021 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:07 crc kubenswrapper[4811]: E0216 20:57:07.703315 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.761523 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.761625 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.761649 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.761682 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.761710 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:07Z","lastTransitionTime":"2026-02-16T20:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.865428 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.865518 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.865545 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.865586 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.865626 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:07Z","lastTransitionTime":"2026-02-16T20:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.969327 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.969417 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.969440 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.969471 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:07 crc kubenswrapper[4811]: I0216 20:57:07.969496 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:07Z","lastTransitionTime":"2026-02-16T20:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.072815 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.072891 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.072910 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.072940 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.072963 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:08Z","lastTransitionTime":"2026-02-16T20:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.175916 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.175972 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.175986 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.176007 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.176021 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:08Z","lastTransitionTime":"2026-02-16T20:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.278604 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.278655 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.278664 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.278680 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.278690 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:08Z","lastTransitionTime":"2026-02-16T20:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.381907 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.381958 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.381974 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.381993 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.382006 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:08Z","lastTransitionTime":"2026-02-16T20:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.484416 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.484484 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.484502 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.484529 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.484550 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:08Z","lastTransitionTime":"2026-02-16T20:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.587617 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.587670 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.587679 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.587696 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.587708 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:08Z","lastTransitionTime":"2026-02-16T20:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.617172 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.617256 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.617270 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.617290 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.617569 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:08Z","lastTransitionTime":"2026-02-16T20:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:08 crc kubenswrapper[4811]: E0216 20:57:08.630786 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:08Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.636162 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.636268 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.636294 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.636328 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.636353 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:08Z","lastTransitionTime":"2026-02-16T20:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:08 crc kubenswrapper[4811]: E0216 20:57:08.657045 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:08Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.666799 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.666878 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.666896 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.666923 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.666940 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:08Z","lastTransitionTime":"2026-02-16T20:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.672491 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 07:40:44.332854802 +0000 UTC Feb 16 20:57:08 crc kubenswrapper[4811]: E0216 20:57:08.680240 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",
\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:08Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.685623 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.685692 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.685706 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.685723 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.685734 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:08Z","lastTransitionTime":"2026-02-16T20:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:08 crc kubenswrapper[4811]: E0216 20:57:08.701110 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:08Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.702251 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.702286 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:08 crc kubenswrapper[4811]: E0216 20:57:08.702411 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:08 crc kubenswrapper[4811]: E0216 20:57:08.702541 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.706135 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.706217 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.706229 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.706250 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.706264 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:08Z","lastTransitionTime":"2026-02-16T20:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:08 crc kubenswrapper[4811]: E0216 20:57:08.726693 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:08Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:08 crc kubenswrapper[4811]: E0216 20:57:08.726869 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.729338 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.729393 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.729407 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.729429 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.729447 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:08Z","lastTransitionTime":"2026-02-16T20:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.832922 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.832992 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.833009 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.833038 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.833057 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:08Z","lastTransitionTime":"2026-02-16T20:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.936312 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.936398 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.936418 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.936447 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:08 crc kubenswrapper[4811]: I0216 20:57:08.936471 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:08Z","lastTransitionTime":"2026-02-16T20:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.039514 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.039571 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.039588 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.039613 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.039634 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:09Z","lastTransitionTime":"2026-02-16T20:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.142718 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.142765 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.142783 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.142806 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.142823 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:09Z","lastTransitionTime":"2026-02-16T20:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.246335 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.246400 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.246431 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.246463 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.246485 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:09Z","lastTransitionTime":"2026-02-16T20:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.350599 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.350667 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.350680 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.350707 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.350722 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:09Z","lastTransitionTime":"2026-02-16T20:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.453735 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.453806 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.453824 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.453852 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.453871 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:09Z","lastTransitionTime":"2026-02-16T20:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.557108 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.557312 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.557329 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.557348 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.557361 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:09Z","lastTransitionTime":"2026-02-16T20:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.660899 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.660972 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.660997 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.661033 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.661062 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:09Z","lastTransitionTime":"2026-02-16T20:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.672641 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 21:01:44.766173682 +0000 UTC Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.702348 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.702403 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:09 crc kubenswrapper[4811]: E0216 20:57:09.702508 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:09 crc kubenswrapper[4811]: E0216 20:57:09.702728 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.764521 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.764587 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.764605 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.764669 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.764689 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:09Z","lastTransitionTime":"2026-02-16T20:57:09Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.868717 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.868790 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.868819 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.868851 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.868875 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:09Z","lastTransitionTime":"2026-02-16T20:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.972792 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.972844 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.972856 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.972896 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:09 crc kubenswrapper[4811]: I0216 20:57:09.972908 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:09Z","lastTransitionTime":"2026-02-16T20:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.075927 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.076023 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.076039 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.076060 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.076074 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:10Z","lastTransitionTime":"2026-02-16T20:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.180378 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.180463 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.180483 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.181010 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.181089 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:10Z","lastTransitionTime":"2026-02-16T20:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.285592 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.285687 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.285717 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.285748 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.285766 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:10Z","lastTransitionTime":"2026-02-16T20:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.388983 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.389033 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.389045 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.389062 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.389075 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:10Z","lastTransitionTime":"2026-02-16T20:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.492312 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.492381 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.492400 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.492434 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.492462 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:10Z","lastTransitionTime":"2026-02-16T20:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.569686 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.582715 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.596386 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.597036 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.597238 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.597355 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.597476 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.597595 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:10Z","lastTransitionTime":"2026-02-16T20:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.617058 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:
56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.636324 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91ed3265-a583-4b6c-bb05-52f5b758b44d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b00840bdcc3183dce8bb004f0e2eeb132030cf0895d91bdefa430d0e9593cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-
proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12cf9d69d4d523505bd8f6a9183f62a05788b057ac1667956aa5aba063ee5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l89mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.655956 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-ce
rts\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID
\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.672797 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 15:39:43.187624616 +0000 UTC Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.676026 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.696030 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763f6f468893cda9dd0d5f2cee2e58567b16a8365e139f78638a16637a0c84f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763f6f468893cda9dd0d5f2cee2e58567b16a8365e139f78638a16637a0c84f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"message\\\":\\\"controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0216 20:56:57.263516 6267 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fh2mx\\\\nI0216 20:56:57.263526 6267 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-machine-config-operator/machine-config-daemon-fh2mx\\\\nI0216 20:56:57.263542 6267 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-fh2mx in node crc\\\\nI0216 20:56:57.263555 6267 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-mgctp\\\\nI0216 20:56:57.263560 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-fh2mx after 0 failed attempt(s)\\\\nF0216 20:56:57.263564 6267 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x2ggt_openshift-ovn-kubernetes(e1bbcd0c-f192-4210-831c-82e87a4768a7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f
2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.701082 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.701327 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.701501 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.701639 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.701764 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:10Z","lastTransitionTime":"2026-02-16T20:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.702338 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:10 crc kubenswrapper[4811]: E0216 20:57:10.702470 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.702826 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:10 crc kubenswrapper[4811]: E0216 20:57:10.703702 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.704318 4811 scope.go:117] "RemoveContainer" containerID="763f6f468893cda9dd0d5f2cee2e58567b16a8365e139f78638a16637a0c84f8" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.713813 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.727042 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7nk7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b4c0a11-23d9-412e-a5d8-120d622bef57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7nk7k\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.740073 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert
\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.762320 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"dat
a-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441
ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.785048 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.801799 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.804754 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.804796 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.804807 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:10 crc 
kubenswrapper[4811]: I0216 20:57:10.804821 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.804831 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:10Z","lastTransitionTime":"2026-02-16T20:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.814599 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.834613 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30
d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.855313 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.872896 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.895632 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414a43cbfdf4a40d4397c606cd52588bb0fcd2ae991d0c4a15763c7f15c838e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.908645 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.908732 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.908763 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.908811 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:10 crc kubenswrapper[4811]: I0216 20:57:10.908838 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:10Z","lastTransitionTime":"2026-02-16T20:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.012239 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.012280 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.012294 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.012314 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.012328 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:11Z","lastTransitionTime":"2026-02-16T20:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.084975 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x2ggt_e1bbcd0c-f192-4210-831c-82e87a4768a7/ovnkube-controller/1.log" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.088629 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerStarted","Data":"3f065505479c1413b7c53986fb30e2494c3ce0a67232606f938abe117abff4be"} Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.110870 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06b
c35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25e
c387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:11Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.118133 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.118257 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.118285 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.118322 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.118351 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:11Z","lastTransitionTime":"2026-02-16T20:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.131515 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:11Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.155937 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:11Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.177909 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91ed3265-a583-4b6c-bb05-52f5b758b44d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b00840bdcc3183dce8bb004f0e2eeb132030cf0895d91bdefa430d0e9593cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12cf9d69d
4d523505bd8f6a9183f62a05788b057ac1667956aa5aba063ee5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l89mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:11Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.195695 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:11Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.209295 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:11Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.223781 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.223828 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.223840 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.223856 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.223867 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:11Z","lastTransitionTime":"2026-02-16T20:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.247139 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f065505479c1413b7c53986fb30e2494c3ce0a67232606f938abe117abff4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763f6f468893cda9dd0d5f2cee2e58567b16a8365e139f78638a16637a0c84f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"message\\\":\\\"controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0216 20:56:57.263516 6267 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fh2mx\\\\nI0216 20:56:57.263526 6267 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-machine-config-operator/machine-config-daemon-fh2mx\\\\nI0216 20:56:57.263542 6267 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-fh2mx in node crc\\\\nI0216 20:56:57.263555 6267 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-mgctp\\\\nI0216 20:56:57.263560 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-fh2mx after 0 failed attempt(s)\\\\nF0216 20:56:57.263564 6267 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:11Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.268016 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:11Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.282330 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7nk7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b4c0a11-23d9-412e-a5d8-120d622bef57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7nk7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:11Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:11 crc 
kubenswrapper[4811]: I0216 20:57:11.299105 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3555f53b-f439-4c1b-885e-d0e987a3eacf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25d4bb653feada8d43c9d5c591dc6b998b5832bd3f22e2ec37e5699eccf969d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6ab160c0ebbd5402cb42a47636289d18fa0b45751a6a1efe080086f58f11a1\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1a2735a81b0d6b9261f675ec2907fa8ef100dba30e3a1bc9f906236eb376c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:11Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.319581 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:11Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.326392 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.326453 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.326464 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.326487 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.326498 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:11Z","lastTransitionTime":"2026-02-16T20:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.334389 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233b
db7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:11Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.349171 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:11Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.362192 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:11Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.376671 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:11Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.396488 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:11Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.412110 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:57:11Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.429403 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.429462 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.429475 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.429500 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.429514 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:11Z","lastTransitionTime":"2026-02-16T20:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.432542 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414a43cbfdf4a40d4397c606cd52588bb0fcd2ae991d0c4a15763c7f15c838e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:11Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.531741 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.531787 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.531799 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.531819 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.531831 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:11Z","lastTransitionTime":"2026-02-16T20:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.701661 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 22:52:13.7898093 +0000 UTC Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.701842 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:11 crc kubenswrapper[4811]: E0216 20:57:11.701979 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.701851 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:11 crc kubenswrapper[4811]: E0216 20:57:11.702332 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.705763 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.705803 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.705814 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.705830 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.705842 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:11Z","lastTransitionTime":"2026-02-16T20:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.808219 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.808282 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.808300 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.808323 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.808337 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:11Z","lastTransitionTime":"2026-02-16T20:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.911155 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.911230 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.911242 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.911330 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:11 crc kubenswrapper[4811]: I0216 20:57:11.911343 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:11Z","lastTransitionTime":"2026-02-16T20:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.014350 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.014597 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.014750 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.014829 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.014921 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:12Z","lastTransitionTime":"2026-02-16T20:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.094732 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x2ggt_e1bbcd0c-f192-4210-831c-82e87a4768a7/ovnkube-controller/2.log" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.095396 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x2ggt_e1bbcd0c-f192-4210-831c-82e87a4768a7/ovnkube-controller/1.log" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.098261 4811 generic.go:334] "Generic (PLEG): container finished" podID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerID="3f065505479c1413b7c53986fb30e2494c3ce0a67232606f938abe117abff4be" exitCode=1 Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.098321 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerDied","Data":"3f065505479c1413b7c53986fb30e2494c3ce0a67232606f938abe117abff4be"} Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.098406 4811 scope.go:117] "RemoveContainer" containerID="763f6f468893cda9dd0d5f2cee2e58567b16a8365e139f78638a16637a0c84f8" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.099263 4811 scope.go:117] "RemoveContainer" containerID="3f065505479c1413b7c53986fb30e2494c3ce0a67232606f938abe117abff4be" Feb 16 20:57:12 crc kubenswrapper[4811]: E0216 20:57:12.099538 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x2ggt_openshift-ovn-kubernetes(e1bbcd0c-f192-4210-831c-82e87a4768a7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.113614 4811 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/network-metrics-daemon-7nk7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b4c0a11-23d9-412e-a5d8-120d622bef57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7nk7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc 
kubenswrapper[4811]: I0216 20:57:12.117823 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.118007 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.118149 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.118310 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.118450 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:12Z","lastTransitionTime":"2026-02-16T20:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.131442 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.145055 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.168584 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f065505479c1413b7c53986fb30e2494c3ce0a67232606f938abe117abff4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763f6f468893cda9dd0d5f2cee2e58567b16a8365e139f78638a16637a0c84f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"message\\\":\\\"controller.go:776] Recording success event on pod 
openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0216 20:56:57.263516 6267 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fh2mx\\\\nI0216 20:56:57.263526 6267 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fh2mx\\\\nI0216 20:56:57.263542 6267 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-fh2mx in node crc\\\\nI0216 20:56:57.263555 6267 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-mgctp\\\\nI0216 20:56:57.263560 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-fh2mx after 0 failed attempt(s)\\\\nF0216 20:56:57.263564 6267 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f065505479c1413b7c53986fb30e2494c3ce0a67232606f938abe117abff4be\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:11Z\\\",\\\"message\\\":\\\"788 6458 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 20:57:11.920958 6458 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 20:57:11.921134 6458 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 20:57:11.921642 6458 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 20:57:11.922062 6458 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 20:57:11.922107 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:57:11.922146 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 20:57:11.922167 6458 factory.go:656] Stopping watch factory\\\\nI0216 20:57:11.922215 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 20:57:11.922230 6458 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:57:11.922506 6458 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env
-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.182969 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.195081 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.208752 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30
d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.221454 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.221485 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.221497 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:12 crc 
kubenswrapper[4811]: I0216 20:57:12.221514 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.221524 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:12Z","lastTransitionTime":"2026-02-16T20:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.226038 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3555f53b-f439-4c1b-885e-d0e987a3eacf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25d4bb653feada8d43c9d5c591dc6b998b5832bd3f22e2ec37e5699eccf969d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6ab160c0ebbd5402cb42a47636289d18fa0b45751a6a1efe080086f58f11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1a2735a81b0d6b9261f675ec2907fa8ef100dba30e3a1bc9f906236eb376c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.245923 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.260216 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:
24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.272158 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.289175 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.307293 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.335592 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.335654 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.335669 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.335694 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.335912 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:12Z","lastTransitionTime":"2026-02-16T20:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.337788 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414a43cbfdf4a40d4397c606cd52588bb0fcd2ae991d0c4a15763c7f15c838e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.357243 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25e
c387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.374055 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.388802 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.409232 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91ed3265-a583-4b6c-bb05-52f5b758b44d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b00840bdcc3183dce8bb004f0e2eeb132030cf0895d91bdefa430d0e9593cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12cf9d69d
4d523505bd8f6a9183f62a05788b057ac1667956aa5aba063ee5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l89mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.439963 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.440025 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.440052 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.440085 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.440108 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:12Z","lastTransitionTime":"2026-02-16T20:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.543338 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.543422 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.543449 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.543488 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.543515 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:12Z","lastTransitionTime":"2026-02-16T20:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.647259 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.647346 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.647371 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.647408 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.647430 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:12Z","lastTransitionTime":"2026-02-16T20:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.702300 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 15:44:24.454775639 +0000 UTC Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.702421 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:12 crc kubenswrapper[4811]: E0216 20:57:12.702695 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.702783 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:12 crc kubenswrapper[4811]: E0216 20:57:12.703011 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.726342 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3555f53b-f439-4c1b-885e-d0e987a3eacf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25d4bb653feada8d43c9d5c591dc6b998b5832bd3f22e2ec37e5699eccf969d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6ab160c0ebbd5402cb42a47636289d18fa0b45751a6a1efe080086f58f11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1a2735a81b0d6b9261f675ec2907fa8ef100dba30e3a1bc9f906236eb376c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.751310 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.751607 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.752009 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.752278 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.752443 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:12Z","lastTransitionTime":"2026-02-16T20:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.759450 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.777668 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.795826 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.822924 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.841591 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.855288 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.855323 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.855335 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.855352 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.855364 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:12Z","lastTransitionTime":"2026-02-16T20:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.867806 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.885243 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.905200 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414a43cbfdf4a40d4397c606cd52588bb0fcd2ae991d0c4a15763c7f15c838e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.925434 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825
771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25e
c387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.943159 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.958564 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.958895 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.959080 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.959292 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.959698 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:12Z","lastTransitionTime":"2026-02-16T20:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.968326 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a
3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:12 crc kubenswrapper[4811]: I0216 20:57:12.987938 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91ed3265-a583-4b6c-bb05-52f5b758b44d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b00840bdcc3183d
ce8bb004f0e2eeb132030cf0895d91bdefa430d0e9593cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12cf9d69d4d523505bd8f6a9183f62a05788b057ac1667956aa5aba063ee5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l89mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.007386 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:13Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.030653 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:13Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.060969 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f065505479c1413b7c53986fb30e2494c3ce0a67232606f938abe117abff4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763f6f468893cda9dd0d5f2cee2e58567b16a8365e139f78638a16637a0c84f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"message\\\":\\\"controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0216 20:56:57.263516 6267 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fh2mx\\\\nI0216 20:56:57.263526 6267 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-machine-config-operator/machine-config-daemon-fh2mx\\\\nI0216 20:56:57.263542 6267 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-fh2mx in node crc\\\\nI0216 20:56:57.263555 6267 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-mgctp\\\\nI0216 20:56:57.263560 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-fh2mx after 0 failed attempt(s)\\\\nF0216 20:56:57.263564 6267 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f065505479c1413b7c53986fb30e2494c3ce0a67232606f938abe117abff4be\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:11Z\\\",\\\"message\\\":\\\"788 6458 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 20:57:11.920958 6458 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 20:57:11.921134 6458 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 20:57:11.921642 6458 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0216 20:57:11.922062 6458 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 20:57:11.922107 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:57:11.922146 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 20:57:11.922167 6458 factory.go:656] Stopping watch factory\\\\nI0216 20:57:11.922215 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 20:57:11.922230 6458 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:57:11.922506 6458 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host
-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:13Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.062765 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.062842 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.062861 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.062893 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.062914 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:13Z","lastTransitionTime":"2026-02-16T20:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.079690 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:13Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.098323 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7nk7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b4c0a11-23d9-412e-a5d8-120d622bef57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7nk7k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:13Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.109626 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x2ggt_e1bbcd0c-f192-4210-831c-82e87a4768a7/ovnkube-controller/2.log" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.166174 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.166251 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.166267 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.166289 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.166304 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:13Z","lastTransitionTime":"2026-02-16T20:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.268993 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.269055 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.269068 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.269083 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.269093 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:13Z","lastTransitionTime":"2026-02-16T20:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.373130 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.373177 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.373186 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.373223 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.373235 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:13Z","lastTransitionTime":"2026-02-16T20:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.476533 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.476586 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.476598 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.476622 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.476637 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:13Z","lastTransitionTime":"2026-02-16T20:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.517799 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.517930 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:13 crc kubenswrapper[4811]: E0216 20:57:13.518005 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:57:45.517964756 +0000 UTC m=+83.447260704 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.518083 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.518154 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:13 crc kubenswrapper[4811]: E0216 20:57:13.518175 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:57:13 crc kubenswrapper[4811]: E0216 20:57:13.518261 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:57:13 crc kubenswrapper[4811]: E0216 20:57:13.518287 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:57:13 crc kubenswrapper[4811]: E0216 20:57:13.518288 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:57:13 crc kubenswrapper[4811]: E0216 20:57:13.518377 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:45.518356666 +0000 UTC m=+83.447652804 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:57:13 crc kubenswrapper[4811]: E0216 20:57:13.518402 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:45.518391596 +0000 UTC m=+83.447687794 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:57:13 crc kubenswrapper[4811]: E0216 20:57:13.518428 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:57:13 crc kubenswrapper[4811]: E0216 20:57:13.518601 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:45.518564121 +0000 UTC m=+83.447860099 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.583142 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.583265 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.583289 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.583319 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.583338 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:13Z","lastTransitionTime":"2026-02-16T20:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.619125 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:13 crc kubenswrapper[4811]: E0216 20:57:13.619404 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:57:13 crc kubenswrapper[4811]: E0216 20:57:13.619453 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:57:13 crc kubenswrapper[4811]: E0216 20:57:13.619475 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:57:13 crc kubenswrapper[4811]: E0216 20:57:13.619575 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:45.61954664 +0000 UTC m=+83.548842608 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.687660 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.687735 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.687757 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.687784 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.687802 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:13Z","lastTransitionTime":"2026-02-16T20:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.702366 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.702363 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.702640 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 05:25:10.997704114 +0000 UTC Feb 16 20:57:13 crc kubenswrapper[4811]: E0216 20:57:13.702542 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:13 crc kubenswrapper[4811]: E0216 20:57:13.702959 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.791814 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.791883 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.791901 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.791930 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.791949 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:13Z","lastTransitionTime":"2026-02-16T20:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.894617 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.894682 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.894695 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.894722 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.894737 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:13Z","lastTransitionTime":"2026-02-16T20:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.921630 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs\") pod \"network-metrics-daemon-7nk7k\" (UID: \"1b4c0a11-23d9-412e-a5d8-120d622bef57\") " pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:13 crc kubenswrapper[4811]: E0216 20:57:13.921893 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:57:13 crc kubenswrapper[4811]: E0216 20:57:13.922028 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs podName:1b4c0a11-23d9-412e-a5d8-120d622bef57 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:29.921995314 +0000 UTC m=+67.851291262 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs") pod "network-metrics-daemon-7nk7k" (UID: "1b4c0a11-23d9-412e-a5d8-120d622bef57") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.997730 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.997886 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.997906 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.997938 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:13 crc kubenswrapper[4811]: I0216 20:57:13.997957 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:13Z","lastTransitionTime":"2026-02-16T20:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.101937 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.101996 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.102008 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.102029 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.102042 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:14Z","lastTransitionTime":"2026-02-16T20:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.204468 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.204536 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.204547 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.204566 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.204578 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:14Z","lastTransitionTime":"2026-02-16T20:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.307786 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.307972 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.307994 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.308449 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.308467 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:14Z","lastTransitionTime":"2026-02-16T20:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.411812 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.411891 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.411917 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.411966 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.411985 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:14Z","lastTransitionTime":"2026-02-16T20:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.560330 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.560414 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.560458 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.560536 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.560560 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:14Z","lastTransitionTime":"2026-02-16T20:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.664354 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.664402 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.664411 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.664426 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.664436 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:14Z","lastTransitionTime":"2026-02-16T20:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.702907 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.702988 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:14 crc kubenswrapper[4811]: E0216 20:57:14.703069 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:14 crc kubenswrapper[4811]: E0216 20:57:14.703154 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.703575 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 19:09:39.626372856 +0000 UTC Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.768728 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.768786 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.768802 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.768824 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.768839 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:14Z","lastTransitionTime":"2026-02-16T20:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.871861 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.871945 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.871966 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.871998 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.872022 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:14Z","lastTransitionTime":"2026-02-16T20:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.974283 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.974333 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.974346 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.974372 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:14 crc kubenswrapper[4811]: I0216 20:57:14.974387 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:14Z","lastTransitionTime":"2026-02-16T20:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.078347 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.078526 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.078552 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.078589 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.078613 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:15Z","lastTransitionTime":"2026-02-16T20:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.182064 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.182133 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.182147 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.182172 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.182194 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:15Z","lastTransitionTime":"2026-02-16T20:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.286489 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.286560 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.286572 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.286596 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.286618 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:15Z","lastTransitionTime":"2026-02-16T20:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.389531 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.389962 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.389980 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.390009 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.390042 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:15Z","lastTransitionTime":"2026-02-16T20:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.492725 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.492809 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.492835 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.492862 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.492881 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:15Z","lastTransitionTime":"2026-02-16T20:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.596030 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.596096 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.596115 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.596138 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.596158 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:15Z","lastTransitionTime":"2026-02-16T20:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.698994 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.699065 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.699084 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.699109 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.699130 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:15Z","lastTransitionTime":"2026-02-16T20:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.702661 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.702668 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:15 crc kubenswrapper[4811]: E0216 20:57:15.702828 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:15 crc kubenswrapper[4811]: E0216 20:57:15.703004 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.704686 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 03:39:53.569826695 +0000 UTC Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.802776 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.802873 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.802924 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.802951 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.802972 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:15Z","lastTransitionTime":"2026-02-16T20:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.907843 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.907913 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.907936 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.907965 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:15 crc kubenswrapper[4811]: I0216 20:57:15.907983 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:15Z","lastTransitionTime":"2026-02-16T20:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.011834 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.011944 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.011963 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.012035 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.012065 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:16Z","lastTransitionTime":"2026-02-16T20:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.115168 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.115241 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.115254 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.115273 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.115284 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:16Z","lastTransitionTime":"2026-02-16T20:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.218671 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.218770 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.218787 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.218814 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.218833 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:16Z","lastTransitionTime":"2026-02-16T20:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.322421 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.322488 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.322506 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.322536 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.322558 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:16Z","lastTransitionTime":"2026-02-16T20:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.425527 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.425590 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.425608 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.425633 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.425651 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:16Z","lastTransitionTime":"2026-02-16T20:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.529474 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.529572 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.529663 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.529747 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.529799 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:16Z","lastTransitionTime":"2026-02-16T20:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.633593 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.633649 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.633665 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.633688 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.633705 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:16Z","lastTransitionTime":"2026-02-16T20:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.702435 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.702524 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:16 crc kubenswrapper[4811]: E0216 20:57:16.702604 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:16 crc kubenswrapper[4811]: E0216 20:57:16.702682 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.705328 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 23:13:47.885657659 +0000 UTC Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.736760 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.736819 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.736838 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.736863 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.736883 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:16Z","lastTransitionTime":"2026-02-16T20:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.839890 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.839953 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.839972 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.840000 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.840023 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:16Z","lastTransitionTime":"2026-02-16T20:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.942920 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.942971 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.942987 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.943009 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:16 crc kubenswrapper[4811]: I0216 20:57:16.943024 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:16Z","lastTransitionTime":"2026-02-16T20:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.046492 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.046557 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.046574 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.046593 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.046607 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:17Z","lastTransitionTime":"2026-02-16T20:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.148796 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.148863 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.148877 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.148899 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.148913 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:17Z","lastTransitionTime":"2026-02-16T20:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.251799 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.251896 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.251922 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.251957 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.251981 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:17Z","lastTransitionTime":"2026-02-16T20:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.355576 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.355644 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.355665 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.355692 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.355712 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:17Z","lastTransitionTime":"2026-02-16T20:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.459506 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.459767 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.459792 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.459821 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.459844 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:17Z","lastTransitionTime":"2026-02-16T20:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.563573 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.563626 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.563640 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.563658 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.563671 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:17Z","lastTransitionTime":"2026-02-16T20:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.666117 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.666187 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.666252 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.666283 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.666305 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:17Z","lastTransitionTime":"2026-02-16T20:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.702862 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.702876 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:17 crc kubenswrapper[4811]: E0216 20:57:17.703107 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:17 crc kubenswrapper[4811]: E0216 20:57:17.703220 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.706018 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 10:20:36.980671686 +0000 UTC Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.769290 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.769379 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.769408 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.769440 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.769463 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:17Z","lastTransitionTime":"2026-02-16T20:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.871986 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.872031 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.872043 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.872063 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.872078 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:17Z","lastTransitionTime":"2026-02-16T20:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.974813 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.974900 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.974929 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.974962 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:17 crc kubenswrapper[4811]: I0216 20:57:17.974986 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:17Z","lastTransitionTime":"2026-02-16T20:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.077631 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.077692 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.077707 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.077728 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.077745 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:18Z","lastTransitionTime":"2026-02-16T20:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.179725 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.179771 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.179815 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.179839 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.179854 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:18Z","lastTransitionTime":"2026-02-16T20:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.282468 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.282508 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.282521 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.282539 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.282552 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:18Z","lastTransitionTime":"2026-02-16T20:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.385796 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.385869 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.385881 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.385900 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.385913 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:18Z","lastTransitionTime":"2026-02-16T20:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.488723 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.488804 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.488819 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.488843 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.488856 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:18Z","lastTransitionTime":"2026-02-16T20:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.592398 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.592957 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.593352 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.593728 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.594043 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:18Z","lastTransitionTime":"2026-02-16T20:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.698120 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.698726 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.698974 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.699268 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.699506 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:18Z","lastTransitionTime":"2026-02-16T20:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.702626 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.702732 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:18 crc kubenswrapper[4811]: E0216 20:57:18.702817 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:18 crc kubenswrapper[4811]: E0216 20:57:18.702912 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.706745 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 15:07:57.661384678 +0000 UTC Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.802487 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.802544 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.802560 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.802580 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.802599 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:18Z","lastTransitionTime":"2026-02-16T20:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.905414 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.905477 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.905495 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.905520 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.905541 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:18Z","lastTransitionTime":"2026-02-16T20:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.907840 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.907877 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.907887 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.907901 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.907914 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:18Z","lastTransitionTime":"2026-02-16T20:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:18 crc kubenswrapper[4811]: E0216 20:57:18.927955 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:18Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.932917 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.932974 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.932994 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.933022 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.933042 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:18Z","lastTransitionTime":"2026-02-16T20:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:18 crc kubenswrapper[4811]: E0216 20:57:18.946701 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:18Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.951178 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.951257 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.951272 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.951290 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.951304 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:18Z","lastTransitionTime":"2026-02-16T20:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:18 crc kubenswrapper[4811]: E0216 20:57:18.971495 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:18Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.976600 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.976665 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.976686 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.976715 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.976734 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:18Z","lastTransitionTime":"2026-02-16T20:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:18 crc kubenswrapper[4811]: E0216 20:57:18.994571 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:18Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.999800 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:18 crc kubenswrapper[4811]: I0216 20:57:18.999857 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:18.999878 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:18.999903 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:18.999936 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:18Z","lastTransitionTime":"2026-02-16T20:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:19 crc kubenswrapper[4811]: E0216 20:57:19.018955 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:19Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:19 crc kubenswrapper[4811]: E0216 20:57:19.019254 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.021764 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.021825 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.021842 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.021869 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.021885 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:19Z","lastTransitionTime":"2026-02-16T20:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.126102 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.126170 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.126185 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.126233 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.126253 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:19Z","lastTransitionTime":"2026-02-16T20:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.230156 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.230286 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.230308 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.230341 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.230364 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:19Z","lastTransitionTime":"2026-02-16T20:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.333623 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.333693 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.333711 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.333739 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.333758 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:19Z","lastTransitionTime":"2026-02-16T20:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.436759 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.436836 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.436855 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.436884 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.436908 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:19Z","lastTransitionTime":"2026-02-16T20:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.540170 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.540288 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.540307 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.540336 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.540354 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:19Z","lastTransitionTime":"2026-02-16T20:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.644111 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.644262 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.644296 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.644328 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.644350 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:19Z","lastTransitionTime":"2026-02-16T20:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.702483 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.702515 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:19 crc kubenswrapper[4811]: E0216 20:57:19.702748 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:19 crc kubenswrapper[4811]: E0216 20:57:19.702880 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.707763 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 00:15:15.264307412 +0000 UTC Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.750676 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.751159 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.751389 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.751634 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.751835 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:19Z","lastTransitionTime":"2026-02-16T20:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.855530 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.855965 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.856085 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.856219 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.856335 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:19Z","lastTransitionTime":"2026-02-16T20:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.959720 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.960038 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.960129 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.960226 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:19 crc kubenswrapper[4811]: I0216 20:57:19.960298 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:19Z","lastTransitionTime":"2026-02-16T20:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.064079 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.064141 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.064158 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.064182 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.064242 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:20Z","lastTransitionTime":"2026-02-16T20:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.167364 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.167433 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.167455 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.167478 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.167496 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:20Z","lastTransitionTime":"2026-02-16T20:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.271077 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.271153 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.271170 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.271216 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.271247 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:20Z","lastTransitionTime":"2026-02-16T20:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.375514 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.375582 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.375603 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.375634 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.375653 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:20Z","lastTransitionTime":"2026-02-16T20:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.478870 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.478916 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.478930 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.478951 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.478964 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:20Z","lastTransitionTime":"2026-02-16T20:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.581935 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.582013 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.582034 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.582064 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.582083 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:20Z","lastTransitionTime":"2026-02-16T20:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.685734 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.685828 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.685852 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.685889 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.685917 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:20Z","lastTransitionTime":"2026-02-16T20:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.702236 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:20 crc kubenswrapper[4811]: E0216 20:57:20.702430 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.702231 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:20 crc kubenswrapper[4811]: E0216 20:57:20.702767 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.707944 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 01:33:43.137298396 +0000 UTC Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.790108 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.790238 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.790263 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.790298 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.790326 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:20Z","lastTransitionTime":"2026-02-16T20:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.894731 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.894810 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.894835 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.894872 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.894898 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:20Z","lastTransitionTime":"2026-02-16T20:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.998172 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.998553 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.998644 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.998743 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:20 crc kubenswrapper[4811]: I0216 20:57:20.998821 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:20Z","lastTransitionTime":"2026-02-16T20:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.102849 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.103439 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.103656 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.103861 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.104066 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:21Z","lastTransitionTime":"2026-02-16T20:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.206862 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.207296 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.207440 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.207592 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.207735 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:21Z","lastTransitionTime":"2026-02-16T20:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.310906 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.311432 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.311679 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.311926 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.312110 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:21Z","lastTransitionTime":"2026-02-16T20:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.415223 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.415569 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.415647 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.415727 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.415831 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:21Z","lastTransitionTime":"2026-02-16T20:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.519024 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.519270 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.519338 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.519412 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.519485 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:21Z","lastTransitionTime":"2026-02-16T20:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.622871 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.623002 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.623039 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.623090 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.623120 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:21Z","lastTransitionTime":"2026-02-16T20:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.701889 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.701954 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:21 crc kubenswrapper[4811]: E0216 20:57:21.702110 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:21 crc kubenswrapper[4811]: E0216 20:57:21.702344 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.708460 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 10:17:56.623886935 +0000 UTC Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.726188 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.726330 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.726360 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.726396 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.726421 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:21Z","lastTransitionTime":"2026-02-16T20:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.829543 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.829602 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.829623 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.829653 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.829674 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:21Z","lastTransitionTime":"2026-02-16T20:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.933451 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.933744 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.933860 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.933930 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:21 crc kubenswrapper[4811]: I0216 20:57:21.934003 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:21Z","lastTransitionTime":"2026-02-16T20:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.038132 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.038180 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.038238 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.038262 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.038277 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:22Z","lastTransitionTime":"2026-02-16T20:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.141180 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.141278 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.141295 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.141319 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.141334 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:22Z","lastTransitionTime":"2026-02-16T20:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.244265 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.244376 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.244396 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.244431 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.244449 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:22Z","lastTransitionTime":"2026-02-16T20:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.355863 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.355970 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.355999 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.356033 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.356056 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:22Z","lastTransitionTime":"2026-02-16T20:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.459910 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.459979 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.459996 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.460026 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.460048 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:22Z","lastTransitionTime":"2026-02-16T20:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.563133 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.563239 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.563306 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.563333 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.563352 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:22Z","lastTransitionTime":"2026-02-16T20:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.666945 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.667016 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.667037 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.667065 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.667085 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:22Z","lastTransitionTime":"2026-02-16T20:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.702010 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:22 crc kubenswrapper[4811]: E0216 20:57:22.702266 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.702600 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:22 crc kubenswrapper[4811]: E0216 20:57:22.702844 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.708679 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 11:45:19.415552066 +0000 UTC Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.728345 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.749951 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.770408 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.770474 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.770500 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.770551 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.770583 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:22Z","lastTransitionTime":"2026-02-16T20:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.774603 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414a43cbfdf4a40d4397c606cd52588bb0fcd2ae991d0c4a15763c7f15c838e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.800007 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25e
c387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.828538 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.859563 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.874751 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:22 crc 
kubenswrapper[4811]: I0216 20:57:22.874865 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.874896 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.874969 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.874996 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:22Z","lastTransitionTime":"2026-02-16T20:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.877881 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91ed3265-a583-4b6c-bb05-52f5b758b44d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b00840bdcc3183dce8bb004f0e2eeb132030cf0895d91bdefa430d0e9593cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12cf9d69d4d523505bd8f6a9183f62a05788b
057ac1667956aa5aba063ee5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l89mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.894347 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7nk7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b4c0a11-23d9-412e-a5d8-120d622bef57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7nk7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc 
kubenswrapper[4811]: I0216 20:57:22.916600 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.942310 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.979605 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.979690 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.979716 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.979758 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.979786 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:22Z","lastTransitionTime":"2026-02-16T20:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:22 crc kubenswrapper[4811]: I0216 20:57:22.979664 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f065505479c1413b7c53986fb30e2494c3ce0a67232606f938abe117abff4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763f6f468893cda9dd0d5f2cee2e58567b16a8365e139f78638a16637a0c84f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"message\\\":\\\"controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0216 20:56:57.263516 6267 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fh2mx\\\\nI0216 20:56:57.263526 6267 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-machine-config-operator/machine-config-daemon-fh2mx\\\\nI0216 20:56:57.263542 6267 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-fh2mx in node crc\\\\nI0216 20:56:57.263555 6267 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-mgctp\\\\nI0216 20:56:57.263560 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-fh2mx after 0 failed attempt(s)\\\\nF0216 20:56:57.263564 6267 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f065505479c1413b7c53986fb30e2494c3ce0a67232606f938abe117abff4be\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:11Z\\\",\\\"message\\\":\\\"788 6458 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 20:57:11.920958 6458 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 20:57:11.921134 6458 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 20:57:11.921642 6458 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0216 20:57:11.922062 6458 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 20:57:11.922107 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:57:11.922146 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 20:57:11.922167 6458 factory.go:656] Stopping watch factory\\\\nI0216 20:57:11.922215 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 20:57:11.922230 6458 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:57:11.922506 6458 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host
-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.000957 4811 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.020696 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb2767
03f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:23Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.042157 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30
d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:23Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.064994 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3555f53b-f439-4c1b-885e-d0e987a3eacf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25d4bb653feada8d43c9d5c591dc6b998b5832bd3f22e2ec37e5699eccf969d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6ab160c0ebbd5402cb42a47636289d18fa0b45751a6a1efe080086f58f11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1a2735a81b0d6b9261f675ec2907fa8ef100dba30e3a1bc9f906236eb376c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:23Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.082365 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.082444 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.082459 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.082483 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.082497 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:23Z","lastTransitionTime":"2026-02-16T20:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.101988 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:23Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.125682 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:23Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.150554 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:23Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.186338 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.186423 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.186448 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:23 crc 
kubenswrapper[4811]: I0216 20:57:23.186483 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.186505 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:23Z","lastTransitionTime":"2026-02-16T20:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.289904 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.290379 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.290555 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.290723 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.290852 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:23Z","lastTransitionTime":"2026-02-16T20:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.394188 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.394258 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.394275 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.394295 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.394310 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:23Z","lastTransitionTime":"2026-02-16T20:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.498437 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.498512 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.498534 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.498561 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.498582 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:23Z","lastTransitionTime":"2026-02-16T20:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.601856 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.601904 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.601915 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.601935 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.602142 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:23Z","lastTransitionTime":"2026-02-16T20:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.701891 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.701964 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:23 crc kubenswrapper[4811]: E0216 20:57:23.702082 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:23 crc kubenswrapper[4811]: E0216 20:57:23.702248 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.704515 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.704581 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.704600 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.704628 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.704648 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:23Z","lastTransitionTime":"2026-02-16T20:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.709660 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 17:17:25.527761952 +0000 UTC Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.808335 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.808414 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.808432 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.808458 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.808477 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:23Z","lastTransitionTime":"2026-02-16T20:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.912743 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.912817 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.912839 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.912867 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:23 crc kubenswrapper[4811]: I0216 20:57:23.912886 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:23Z","lastTransitionTime":"2026-02-16T20:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.017038 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.017128 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.017152 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.017187 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.017260 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:24Z","lastTransitionTime":"2026-02-16T20:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.120991 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.121070 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.121091 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.121122 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.121142 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:24Z","lastTransitionTime":"2026-02-16T20:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.225438 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.225509 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.225533 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.225562 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.225586 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:24Z","lastTransitionTime":"2026-02-16T20:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.330166 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.330270 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.330296 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.330332 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.330359 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:24Z","lastTransitionTime":"2026-02-16T20:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.433796 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.433845 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.433861 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.433882 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.433894 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:24Z","lastTransitionTime":"2026-02-16T20:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.536992 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.537034 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.537046 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.537065 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.537078 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:24Z","lastTransitionTime":"2026-02-16T20:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.640239 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.640316 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.640341 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.640376 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.640408 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:24Z","lastTransitionTime":"2026-02-16T20:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.702799 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.702918 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:24 crc kubenswrapper[4811]: E0216 20:57:24.703064 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:24 crc kubenswrapper[4811]: E0216 20:57:24.703234 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.709798 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 03:36:53.509080152 +0000 UTC Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.710005 4811 scope.go:117] "RemoveContainer" containerID="3f065505479c1413b7c53986fb30e2494c3ce0a67232606f938abe117abff4be" Feb 16 20:57:24 crc kubenswrapper[4811]: E0216 20:57:24.710434 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x2ggt_openshift-ovn-kubernetes(e1bbcd0c-f192-4210-831c-82e87a4768a7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.727926 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\"
,\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:24Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.745150 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.745185 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.745209 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.745229 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.745241 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:24Z","lastTransitionTime":"2026-02-16T20:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.749381 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:24Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.765182 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:24Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.781312 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91ed3265-a583-4b6c-bb05-52f5b758b44d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b00840bdcc3183dce8bb004f0e2eeb132030cf0895d91bdefa430d0e9593cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12cf9d69d
4d523505bd8f6a9183f62a05788b057ac1667956aa5aba063ee5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l89mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:24Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.799472 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:24Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.824813 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:24Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.848293 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.848345 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.848358 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.848380 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.848397 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:24Z","lastTransitionTime":"2026-02-16T20:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.849053 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f065505479c1413b7c53986fb30e2494c3ce0a67232606f938abe117abff4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f065505479c1413b7c53986fb30e2494c3ce0a67232606f938abe117abff4be\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:11Z\\\",\\\"message\\\":\\\"788 6458 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 20:57:11.920958 6458 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 20:57:11.921134 6458 reflector.go:311] Stopping reflector 
*v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 20:57:11.921642 6458 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 20:57:11.922062 6458 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 20:57:11.922107 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:57:11.922146 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 20:57:11.922167 6458 factory.go:656] Stopping watch factory\\\\nI0216 20:57:11.922215 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 20:57:11.922230 6458 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:57:11.922506 6458 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x2ggt_openshift-ovn-kubernetes(e1bbcd0c-f192-4210-831c-82e87a4768a7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f
2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:24Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.865006 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:24Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.882248 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7nk7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b4c0a11-23d9-412e-a5d8-120d622bef57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7nk7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:24Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:24 crc 
kubenswrapper[4811]: I0216 20:57:24.896412 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3555f53b-f439-4c1b-885e-d0e987a3eacf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25d4bb653feada8d43c9d5c591dc6b998b5832bd3f22e2ec37e5699eccf969d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6ab160c0ebbd5402cb42a47636289d18fa0b45751a6a1efe080086f58f11a1\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1a2735a81b0d6b9261f675ec2907fa8ef100dba30e3a1bc9f906236eb376c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:24Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.925814 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:24Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.938247 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:
24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:24Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.951419 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:24Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.951814 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.951935 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.952005 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.952112 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.952181 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:24Z","lastTransitionTime":"2026-02-16T20:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.966825 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:24Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.981411 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30
d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:24Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:24 crc kubenswrapper[4811]: I0216 20:57:24.997788 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:24Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.013332 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:57:25Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.031542 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414a43cbfdf4a40d4397c606cd52588bb0fcd2ae991d0c4a15763c7f15c838e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:25Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.054696 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.054759 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.054776 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.054798 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.054813 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:25Z","lastTransitionTime":"2026-02-16T20:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.157809 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.157878 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.157902 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.157934 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.157958 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:25Z","lastTransitionTime":"2026-02-16T20:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.261728 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.261783 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.261801 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.261830 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.261849 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:25Z","lastTransitionTime":"2026-02-16T20:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.370306 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.371850 4811 scope.go:117] "RemoveContainer" containerID="3f065505479c1413b7c53986fb30e2494c3ce0a67232606f938abe117abff4be" Feb 16 20:57:25 crc kubenswrapper[4811]: E0216 20:57:25.372347 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x2ggt_openshift-ovn-kubernetes(e1bbcd0c-f192-4210-831c-82e87a4768a7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.373275 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.373322 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.373333 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.373354 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.373365 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:25Z","lastTransitionTime":"2026-02-16T20:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.475826 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.476337 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.476531 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.476710 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.476856 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:25Z","lastTransitionTime":"2026-02-16T20:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.580300 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.580399 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.580437 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.580478 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.580498 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:25Z","lastTransitionTime":"2026-02-16T20:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.683780 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.683893 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.683913 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.683943 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.683964 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:25Z","lastTransitionTime":"2026-02-16T20:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.702093 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.702279 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:25 crc kubenswrapper[4811]: E0216 20:57:25.702284 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:25 crc kubenswrapper[4811]: E0216 20:57:25.702519 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.710599 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 19:35:15.420209579 +0000 UTC Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.787300 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.787671 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.787876 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.788096 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.788344 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:25Z","lastTransitionTime":"2026-02-16T20:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.892374 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.892455 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.892476 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.892505 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.892525 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:25Z","lastTransitionTime":"2026-02-16T20:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.995043 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.995104 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.995121 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.995148 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:25 crc kubenswrapper[4811]: I0216 20:57:25.995168 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:25Z","lastTransitionTime":"2026-02-16T20:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.097605 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.097674 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.097685 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.097705 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.097716 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:26Z","lastTransitionTime":"2026-02-16T20:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.201087 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.201142 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.201154 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.201178 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.201194 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:26Z","lastTransitionTime":"2026-02-16T20:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.305006 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.305621 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.305723 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.305831 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.306137 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:26Z","lastTransitionTime":"2026-02-16T20:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.409499 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.409562 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.409577 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.409600 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.409614 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:26Z","lastTransitionTime":"2026-02-16T20:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.514947 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.515020 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.515035 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.515060 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.515078 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:26Z","lastTransitionTime":"2026-02-16T20:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.617563 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.617621 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.617639 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.617663 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.617679 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:26Z","lastTransitionTime":"2026-02-16T20:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.702519 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:26 crc kubenswrapper[4811]: E0216 20:57:26.702764 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.703042 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:26 crc kubenswrapper[4811]: E0216 20:57:26.703338 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.710730 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 18:10:01.464126759 +0000 UTC Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.720174 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.720251 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.720261 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.720278 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.720289 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:26Z","lastTransitionTime":"2026-02-16T20:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.823009 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.823059 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.823073 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.823096 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.823113 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:26Z","lastTransitionTime":"2026-02-16T20:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.925604 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.925665 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.925675 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.925693 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:26 crc kubenswrapper[4811]: I0216 20:57:26.925704 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:26Z","lastTransitionTime":"2026-02-16T20:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.028239 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.028602 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.028689 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.028776 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.028845 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:27Z","lastTransitionTime":"2026-02-16T20:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.130708 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.131029 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.131108 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.131180 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.131280 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:27Z","lastTransitionTime":"2026-02-16T20:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.234427 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.234465 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.234474 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.234489 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.234501 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:27Z","lastTransitionTime":"2026-02-16T20:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.336731 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.336778 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.336789 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.336828 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.336843 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:27Z","lastTransitionTime":"2026-02-16T20:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.439450 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.439515 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.439526 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.439542 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.439551 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:27Z","lastTransitionTime":"2026-02-16T20:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.542464 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.542538 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.542555 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.542582 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.542610 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:27Z","lastTransitionTime":"2026-02-16T20:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.645542 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.645601 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.645616 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.645633 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.645646 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:27Z","lastTransitionTime":"2026-02-16T20:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.702102 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.702175 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:27 crc kubenswrapper[4811]: E0216 20:57:27.702262 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:27 crc kubenswrapper[4811]: E0216 20:57:27.702454 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.710894 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:52:33.247654441 +0000 UTC Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.747909 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.747963 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.747982 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.748007 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.748024 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:27Z","lastTransitionTime":"2026-02-16T20:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.850942 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.850999 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.851022 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.851045 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.851062 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:27Z","lastTransitionTime":"2026-02-16T20:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.954010 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.954052 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.954062 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.954078 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:27 crc kubenswrapper[4811]: I0216 20:57:27.954090 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:27Z","lastTransitionTime":"2026-02-16T20:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.056328 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.056379 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.056389 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.056405 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.056424 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:28Z","lastTransitionTime":"2026-02-16T20:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.159987 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.160035 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.160047 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.160079 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.160095 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:28Z","lastTransitionTime":"2026-02-16T20:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.262769 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.262835 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.262852 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.262879 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.262898 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:28Z","lastTransitionTime":"2026-02-16T20:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.369616 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.369664 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.369686 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.369704 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.369719 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:28Z","lastTransitionTime":"2026-02-16T20:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.472543 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.472600 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.472612 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.472632 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.472644 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:28Z","lastTransitionTime":"2026-02-16T20:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.575046 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.575095 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.575134 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.575152 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.575162 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:28Z","lastTransitionTime":"2026-02-16T20:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.678152 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.678242 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.678261 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.678288 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.678306 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:28Z","lastTransitionTime":"2026-02-16T20:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.702664 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.702724 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:28 crc kubenswrapper[4811]: E0216 20:57:28.702798 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:28 crc kubenswrapper[4811]: E0216 20:57:28.702884 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.711981 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 23:33:05.059040227 +0000 UTC Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.782332 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.782392 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.782416 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.782440 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.782456 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:28Z","lastTransitionTime":"2026-02-16T20:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.884838 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.884922 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.884941 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.885157 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.885178 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:28Z","lastTransitionTime":"2026-02-16T20:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.989088 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.989134 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.989144 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.989161 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:28 crc kubenswrapper[4811]: I0216 20:57:28.989174 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:28Z","lastTransitionTime":"2026-02-16T20:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.114598 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.114672 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.114693 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.114726 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.114750 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:29Z","lastTransitionTime":"2026-02-16T20:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.217782 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.217879 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.217905 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.217933 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.217951 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:29Z","lastTransitionTime":"2026-02-16T20:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.241708 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.241774 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.241796 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.241824 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.241842 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:29Z","lastTransitionTime":"2026-02-16T20:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:29 crc kubenswrapper[4811]: E0216 20:57:29.255805 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:29Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.261582 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.261655 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.261676 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.261708 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.261729 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:29Z","lastTransitionTime":"2026-02-16T20:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:29 crc kubenswrapper[4811]: E0216 20:57:29.276343 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:29Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.283381 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.283650 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.283734 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.283849 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.284024 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:29Z","lastTransitionTime":"2026-02-16T20:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:29 crc kubenswrapper[4811]: E0216 20:57:29.307727 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:29Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.313495 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.313551 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.313572 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.313598 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.313620 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:29Z","lastTransitionTime":"2026-02-16T20:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:29 crc kubenswrapper[4811]: E0216 20:57:29.331034 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:29Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.337811 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.337875 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.337905 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.337941 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.337968 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:29Z","lastTransitionTime":"2026-02-16T20:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:29 crc kubenswrapper[4811]: E0216 20:57:29.359602 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:29Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:29 crc kubenswrapper[4811]: E0216 20:57:29.359846 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.361907 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.361957 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.361968 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.361986 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.361995 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:29Z","lastTransitionTime":"2026-02-16T20:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.465138 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.465188 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.465216 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.465236 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.465246 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:29Z","lastTransitionTime":"2026-02-16T20:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.570994 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.571054 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.571068 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.571092 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.571108 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:29Z","lastTransitionTime":"2026-02-16T20:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.673687 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.673729 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.673741 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.673758 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.673771 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:29Z","lastTransitionTime":"2026-02-16T20:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.702393 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.702468 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:29 crc kubenswrapper[4811]: E0216 20:57:29.702558 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:29 crc kubenswrapper[4811]: E0216 20:57:29.702708 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.712946 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 19:21:57.020870277 +0000 UTC Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.776554 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.776609 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.776626 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.776671 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.776686 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:29Z","lastTransitionTime":"2026-02-16T20:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.879091 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.879124 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.879134 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.879153 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.879161 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:29Z","lastTransitionTime":"2026-02-16T20:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.941017 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs\") pod \"network-metrics-daemon-7nk7k\" (UID: \"1b4c0a11-23d9-412e-a5d8-120d622bef57\") " pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:29 crc kubenswrapper[4811]: E0216 20:57:29.941161 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:57:29 crc kubenswrapper[4811]: E0216 20:57:29.941235 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs podName:1b4c0a11-23d9-412e-a5d8-120d622bef57 nodeName:}" failed. No retries permitted until 2026-02-16 20:58:01.941219741 +0000 UTC m=+99.870515679 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs") pod "network-metrics-daemon-7nk7k" (UID: "1b4c0a11-23d9-412e-a5d8-120d622bef57") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.981436 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.981495 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.981510 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.981531 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:29 crc kubenswrapper[4811]: I0216 20:57:29.981545 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:29Z","lastTransitionTime":"2026-02-16T20:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.085316 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.085367 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.085376 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.085395 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.085405 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:30Z","lastTransitionTime":"2026-02-16T20:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.187696 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.187749 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.187761 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.187779 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.187790 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:30Z","lastTransitionTime":"2026-02-16T20:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.290347 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.290421 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.290434 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.290454 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.290484 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:30Z","lastTransitionTime":"2026-02-16T20:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.392755 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.392798 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.392810 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.392825 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.392837 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:30Z","lastTransitionTime":"2026-02-16T20:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.495186 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.495252 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.495263 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.495282 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.495295 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:30Z","lastTransitionTime":"2026-02-16T20:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.598698 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.599174 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.599368 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.599509 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.599667 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:30Z","lastTransitionTime":"2026-02-16T20:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.702120 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:30 crc kubenswrapper[4811]: E0216 20:57:30.702336 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.702512 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.702545 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.702557 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.702577 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.702591 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:30Z","lastTransitionTime":"2026-02-16T20:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.702872 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:30 crc kubenswrapper[4811]: E0216 20:57:30.703081 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.713437 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 05:10:45.485479022 +0000 UTC Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.806110 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.806502 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.806705 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.806847 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.807035 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:30Z","lastTransitionTime":"2026-02-16T20:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.910649 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.910789 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.910810 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.910837 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:30 crc kubenswrapper[4811]: I0216 20:57:30.910893 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:30Z","lastTransitionTime":"2026-02-16T20:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.015776 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.015825 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.015835 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.015853 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.015863 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:31Z","lastTransitionTime":"2026-02-16T20:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.119646 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.119717 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.119735 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.119765 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.119790 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:31Z","lastTransitionTime":"2026-02-16T20:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.223635 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.223696 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.223715 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.223743 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.223764 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:31Z","lastTransitionTime":"2026-02-16T20:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.327636 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.327698 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.327716 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.327741 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.327759 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:31Z","lastTransitionTime":"2026-02-16T20:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.431263 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.431350 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.431389 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.431419 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.431433 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:31Z","lastTransitionTime":"2026-02-16T20:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.534333 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.534390 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.534409 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.534436 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.534456 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:31Z","lastTransitionTime":"2026-02-16T20:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.637095 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.637784 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.637922 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.638077 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.638279 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:31Z","lastTransitionTime":"2026-02-16T20:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.702276 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:31 crc kubenswrapper[4811]: E0216 20:57:31.702437 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.702763 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:31 crc kubenswrapper[4811]: E0216 20:57:31.703102 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.713977 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:29:45.349949895 +0000 UTC Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.741520 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.741554 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.741564 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.741581 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.741592 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:31Z","lastTransitionTime":"2026-02-16T20:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.844000 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.844050 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.844065 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.844085 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.844098 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:31Z","lastTransitionTime":"2026-02-16T20:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.946221 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.946267 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.946278 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.946291 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:31 crc kubenswrapper[4811]: I0216 20:57:31.946301 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:31Z","lastTransitionTime":"2026-02-16T20:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.048569 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.048616 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.048629 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.048646 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.048659 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:32Z","lastTransitionTime":"2026-02-16T20:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.150950 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.150978 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.150986 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.151000 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.151009 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:32Z","lastTransitionTime":"2026-02-16T20:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.253833 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.253896 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.253920 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.253952 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.253972 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:32Z","lastTransitionTime":"2026-02-16T20:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.357083 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.357154 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.357173 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.357232 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.357251 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:32Z","lastTransitionTime":"2026-02-16T20:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.459664 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.459705 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.459715 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.459729 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.459742 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:32Z","lastTransitionTime":"2026-02-16T20:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.562073 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.562109 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.562122 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.562138 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.562149 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:32Z","lastTransitionTime":"2026-02-16T20:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.664800 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.664852 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.664866 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.664885 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.664899 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:32Z","lastTransitionTime":"2026-02-16T20:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.702115 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.702186 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:32 crc kubenswrapper[4811]: E0216 20:57:32.702309 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:32 crc kubenswrapper[4811]: E0216 20:57:32.702362 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.714324 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 01:32:06.026071874 +0000 UTC Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.721564 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:32Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.734902 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:32Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.746136 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91ed3265-a583-4b6c-bb05-52f5b758b44d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b00840bdcc3183dce8bb004f0e2eeb132030cf0895d91bdefa430d0e9593cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12cf9d69d
4d523505bd8f6a9183f62a05788b057ac1667956aa5aba063ee5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l89mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:32Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.759587 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\"
,\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:32Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.768601 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.768975 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.768993 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.769014 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.769028 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:32Z","lastTransitionTime":"2026-02-16T20:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.773600 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:32Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.800365 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f065505479c1413b7c53986fb30e2494c3ce0a67232606f938abe117abff4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f065505479c1413b7c53986fb30e2494c3ce0a67232606f938abe117abff4be\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:11Z\\\",\\\"message\\\":\\\"788 6458 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 20:57:11.920958 6458 
reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 20:57:11.921134 6458 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 20:57:11.921642 6458 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 20:57:11.922062 6458 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 20:57:11.922107 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:57:11.922146 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 20:57:11.922167 6458 factory.go:656] Stopping watch factory\\\\nI0216 20:57:11.922215 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 20:57:11.922230 6458 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:57:11.922506 6458 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x2ggt_openshift-ovn-kubernetes(e1bbcd0c-f192-4210-831c-82e87a4768a7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f
2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:32Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.812756 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:32Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.824319 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7nk7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b4c0a11-23d9-412e-a5d8-120d622bef57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7nk7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:32Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:32 crc 
kubenswrapper[4811]: I0216 20:57:32.838245 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:32Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.857489 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:32Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.871396 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:32Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.871654 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.871694 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.871704 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.871719 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.871730 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:32Z","lastTransitionTime":"2026-02-16T20:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.887097 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:32Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.901565 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:32Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.918996 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:32Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.933492 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3555f53b-f439-4c1b-885e-d0e987a3eacf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25d4bb653feada8d43c9d5c591dc6b998b5832bd3f22e2ec37e5699eccf969d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6ab160c0ebbd5402cb42a47636289d18fa0b45751a6a1efe080086f58f11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1a2735a81b0d6b9261f675ec2907fa8ef100dba30e3a1bc9f906236eb376c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:32Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.945512 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:32Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.957906 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:57:32Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.975075 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.975137 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.975154 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.975175 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.975188 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:32Z","lastTransitionTime":"2026-02-16T20:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:32 crc kubenswrapper[4811]: I0216 20:57:32.975425 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414a43cbfdf4a40d4397c606cd52588bb0fcd2ae991d0c4a15763c7f15c838e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:32Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.078428 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.078479 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.078495 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.078513 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.078526 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:33Z","lastTransitionTime":"2026-02-16T20:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.181946 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.181992 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.182006 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.182025 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.182038 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:33Z","lastTransitionTime":"2026-02-16T20:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.285167 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.285303 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.285324 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.285360 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.285381 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:33Z","lastTransitionTime":"2026-02-16T20:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.388766 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.388837 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.388854 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.388903 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.388917 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:33Z","lastTransitionTime":"2026-02-16T20:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.491544 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.491629 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.491648 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.491678 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.491696 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:33Z","lastTransitionTime":"2026-02-16T20:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.595097 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.595179 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.595255 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.595299 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.595325 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:33Z","lastTransitionTime":"2026-02-16T20:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.698092 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.698153 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.698168 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.698212 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.698233 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:33Z","lastTransitionTime":"2026-02-16T20:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.702642 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.702674 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:33 crc kubenswrapper[4811]: E0216 20:57:33.702797 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:33 crc kubenswrapper[4811]: E0216 20:57:33.702968 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.714769 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 20:14:03.02309725 +0000 UTC Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.801364 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.801411 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.801421 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.801437 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.801448 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:33Z","lastTransitionTime":"2026-02-16T20:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.904984 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.905055 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.905073 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.905105 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:33 crc kubenswrapper[4811]: I0216 20:57:33.905125 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:33Z","lastTransitionTime":"2026-02-16T20:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.008673 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.008796 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.008825 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.008853 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.008876 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:34Z","lastTransitionTime":"2026-02-16T20:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.112856 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.112910 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.112920 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.112942 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.112959 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:34Z","lastTransitionTime":"2026-02-16T20:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.189081 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mgctp_a946fefd-e014-48b1-995b-ef221a88bc73/kube-multus/0.log" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.189150 4811 generic.go:334] "Generic (PLEG): container finished" podID="a946fefd-e014-48b1-995b-ef221a88bc73" containerID="9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b" exitCode=1 Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.189212 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mgctp" event={"ID":"a946fefd-e014-48b1-995b-ef221a88bc73","Type":"ContainerDied","Data":"9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b"} Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.197372 4811 scope.go:117] "RemoveContainer" containerID="9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.216149 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.216220 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.216235 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.216254 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.216265 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:34Z","lastTransitionTime":"2026-02-16T20:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.219155 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\
",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76
b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:34Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.235423 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:34Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.252143 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:34Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.264930 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:34Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.278591 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:34Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.290532 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3555f53b-f439-4c1b-885e-d0e987a3eacf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25d4bb653feada8d43c9d5c591dc6b998b5832bd3f22e2ec37e5699eccf969d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6ab160c0ebbd5402cb42a47636289d18fa0b45751a6a1efe080086f58f11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1a2735a81b0d6b9261f675ec2907fa8ef100dba30e3a1bc9f906236eb376c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:34Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.307606 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:34Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.318610 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.318660 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.318673 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.318695 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.318709 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:34Z","lastTransitionTime":"2026-02-16T20:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.323379 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:34Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.341891 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414a43cbfdf4a40d4397c606cd52588bb0fcd2ae991d0c4a15763c7f15c838e8\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:34Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.358763 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:34Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.378017 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:34Z\\\",\\\"message\\\":\\\"2026-02-16T20:56:48+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0c13ebbc-099e-472a-a694-f3e90f379f63\\\\n2026-02-16T20:56:48+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0c13ebbc-099e-472a-a694-f3e90f379f63 to /host/opt/cni/bin/\\\\n2026-02-16T20:56:49Z [verbose] multus-daemon started\\\\n2026-02-16T20:56:49Z [verbose] Readiness Indicator file check\\\\n2026-02-16T20:57:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:34Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.390916 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91ed3265-a583-4b6c-bb05-52f5b758b44d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b00840bdcc3183dce8bb004f0e2eeb132030cf0895d91bdefa430d0e9593cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12cf9d69d4d523505bd8f6a9183f62a05788b
057ac1667956aa5aba063ee5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l89mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:34Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.404503 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\"
,\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:34Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.420785 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:34Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.422151 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.422241 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.422258 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 
20:57:34.422280 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.422320 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:34Z","lastTransitionTime":"2026-02-16T20:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.440486 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f065505479c1413b7c53986fb30e2494c3ce0a67232606f938abe117abff4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f065505479c1413b7c53986fb30e2494c3ce0a67232606f938abe117abff4be\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:11Z\\\",\\\"message\\\":\\\"788 6458 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 20:57:11.920958 6458 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 20:57:11.921134 6458 reflector.go:311] Stopping reflector 
*v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 20:57:11.921642 6458 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 20:57:11.922062 6458 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 20:57:11.922107 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:57:11.922146 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 20:57:11.922167 6458 factory.go:656] Stopping watch factory\\\\nI0216 20:57:11.922215 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 20:57:11.922230 6458 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:57:11.922506 6458 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x2ggt_openshift-ovn-kubernetes(e1bbcd0c-f192-4210-831c-82e87a4768a7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f
2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:34Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.453394 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:34Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.468441 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7nk7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b4c0a11-23d9-412e-a5d8-120d622bef57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7nk7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:34Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:34 crc 
kubenswrapper[4811]: I0216 20:57:34.489688 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:34Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.526245 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.526299 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.526322 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.526376 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.526397 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:34Z","lastTransitionTime":"2026-02-16T20:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.629304 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.629364 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.629382 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.629409 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.629423 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:34Z","lastTransitionTime":"2026-02-16T20:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.702184 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:34 crc kubenswrapper[4811]: E0216 20:57:34.702345 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.702425 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:34 crc kubenswrapper[4811]: E0216 20:57:34.702753 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.715296 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 15:42:53.383192805 +0000 UTC Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.732382 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.732415 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.732424 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.732441 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.732451 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:34Z","lastTransitionTime":"2026-02-16T20:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.835722 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.835774 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.835788 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.835812 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.835826 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:34Z","lastTransitionTime":"2026-02-16T20:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.938767 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.938832 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.938846 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.938874 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:34 crc kubenswrapper[4811]: I0216 20:57:34.938889 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:34Z","lastTransitionTime":"2026-02-16T20:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.042042 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.042106 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.042125 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.042158 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.042178 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:35Z","lastTransitionTime":"2026-02-16T20:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.145790 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.145860 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.145873 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.145902 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.145930 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:35Z","lastTransitionTime":"2026-02-16T20:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.198416 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mgctp_a946fefd-e014-48b1-995b-ef221a88bc73/kube-multus/0.log" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.198499 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mgctp" event={"ID":"a946fefd-e014-48b1-995b-ef221a88bc73","Type":"ContainerStarted","Data":"276a19c80bef50556fb786571f8b1c5f5d2a798fa193fc5854a3cafa254b32c8"} Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.221064 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.249097 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f065505479c1413b7c53986fb30e2494c3ce0a67232606f938abe117abff4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f065505479c1413b7c53986fb30e2494c3ce0a67232606f938abe117abff4be\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:11Z\\\",\\\"message\\\":\\\"788 6458 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 20:57:11.920958 6458 
reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 20:57:11.921134 6458 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 20:57:11.921642 6458 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 20:57:11.922062 6458 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 20:57:11.922107 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:57:11.922146 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 20:57:11.922167 6458 factory.go:656] Stopping watch factory\\\\nI0216 20:57:11.922215 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 20:57:11.922230 6458 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:57:11.922506 6458 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x2ggt_openshift-ovn-kubernetes(e1bbcd0c-f192-4210-831c-82e87a4768a7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f
2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.249368 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.249416 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.249435 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.249459 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.249476 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:35Z","lastTransitionTime":"2026-02-16T20:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.265103 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.279355 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7nk7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b4c0a11-23d9-412e-a5d8-120d622bef57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7nk7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc 
kubenswrapper[4811]: I0216 20:57:35.293979 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.325224 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.338876 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.352042 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.352090 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.352111 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.352136 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.352153 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:35Z","lastTransitionTime":"2026-02-16T20:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.358245 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.370901 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.387969 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.406494 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3555f53b-f439-4c1b-885e-d0e987a3eacf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25d4bb653feada8d43c9d5c591dc6b998b5832bd3f22e2ec37e5699eccf969d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6ab160c0ebbd5402cb42a47636289d18fa0b45751a6a1efe080086f58f11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1a2735a81b0d6b9261f675ec2907fa8ef100dba30e3a1bc9f906236eb376c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.426325 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.446729 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.455972 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.456058 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.456084 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.456121 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.456153 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:35Z","lastTransitionTime":"2026-02-16T20:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.471348 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414a43cbfdf4a40d4397c606cd52588bb0fcd2ae991d0c4a15763c7f15c838e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.490171 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.507230 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://276a19c80bef50556fb786571f8b1c5f5d2a798fa193fc5854a3cafa254b32c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:34Z\\\",\\\"message\\\":\\\"2026-02-16T20:56:48+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0c13ebbc-099e-472a-a694-f3e90f379f63\\\\n2026-02-16T20:56:48+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0c13ebbc-099e-472a-a694-f3e90f379f63 to /host/opt/cni/bin/\\\\n2026-02-16T20:56:49Z [verbose] multus-daemon started\\\\n2026-02-16T20:56:49Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T20:57:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.524357 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91ed3265-a583-4b6c-bb05-52f5b758b44d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b00840bdcc3183dce8bb004f0e2eeb132030cf0895d91bdefa430d0e9593cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12cf9d69d4d523505bd8f6a9183f62a05788b
057ac1667956aa5aba063ee5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l89mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.545949 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\"
,\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.559281 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.559344 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.559357 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.559383 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.559398 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:35Z","lastTransitionTime":"2026-02-16T20:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.662683 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.662738 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.662757 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.662780 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.662798 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:35Z","lastTransitionTime":"2026-02-16T20:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.702458 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.702517 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:35 crc kubenswrapper[4811]: E0216 20:57:35.702663 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:35 crc kubenswrapper[4811]: E0216 20:57:35.702770 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.715657 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 22:16:58.240141927 +0000 UTC Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.766529 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.766583 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.766592 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.766609 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.766619 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:35Z","lastTransitionTime":"2026-02-16T20:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.870293 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.870334 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.870348 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.870366 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.870380 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:35Z","lastTransitionTime":"2026-02-16T20:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.973830 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.973928 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.973956 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.974001 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:35 crc kubenswrapper[4811]: I0216 20:57:35.974028 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:35Z","lastTransitionTime":"2026-02-16T20:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.076300 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.076376 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.076407 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.076443 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.076472 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:36Z","lastTransitionTime":"2026-02-16T20:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.178789 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.178849 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.178862 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.178880 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.178892 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:36Z","lastTransitionTime":"2026-02-16T20:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.283443 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.283491 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.283501 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.283520 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.283531 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:36Z","lastTransitionTime":"2026-02-16T20:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.386409 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.386464 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.386475 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.386495 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.386506 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:36Z","lastTransitionTime":"2026-02-16T20:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.489706 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.489754 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.489767 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.489785 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.489797 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:36Z","lastTransitionTime":"2026-02-16T20:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.592189 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.592296 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.592316 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.592344 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.592365 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:36Z","lastTransitionTime":"2026-02-16T20:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.695848 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.695887 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.695895 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.695911 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.695921 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:36Z","lastTransitionTime":"2026-02-16T20:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.702390 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.702464 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:36 crc kubenswrapper[4811]: E0216 20:57:36.702554 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:36 crc kubenswrapper[4811]: E0216 20:57:36.702618 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.715855 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 07:23:34.472756039 +0000 UTC Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.798057 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.798517 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.798701 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.798903 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.799100 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:36Z","lastTransitionTime":"2026-02-16T20:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.901836 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.901930 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.901939 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.901957 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:36 crc kubenswrapper[4811]: I0216 20:57:36.901968 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:36Z","lastTransitionTime":"2026-02-16T20:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.005112 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.005273 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.005300 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.005332 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.005353 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:37Z","lastTransitionTime":"2026-02-16T20:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.108944 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.109032 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.109049 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.109075 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.109089 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:37Z","lastTransitionTime":"2026-02-16T20:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.211907 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.211995 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.212044 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.212072 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.212113 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:37Z","lastTransitionTime":"2026-02-16T20:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.314396 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.314466 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.314487 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.314520 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.314541 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:37Z","lastTransitionTime":"2026-02-16T20:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.417159 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.417470 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.417500 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.417537 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.417561 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:37Z","lastTransitionTime":"2026-02-16T20:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.521096 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.521167 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.521189 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.521243 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.521275 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:37Z","lastTransitionTime":"2026-02-16T20:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.624262 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.624325 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.624344 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.624371 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.624410 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:37Z","lastTransitionTime":"2026-02-16T20:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.702643 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.702728 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:37 crc kubenswrapper[4811]: E0216 20:57:37.702896 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:37 crc kubenswrapper[4811]: E0216 20:57:37.703102 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.717158 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 19:09:39.342174666 +0000 UTC Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.727328 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.727457 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.727478 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.727503 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.727524 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:37Z","lastTransitionTime":"2026-02-16T20:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.831178 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.831301 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.831320 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.831353 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.831400 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:37Z","lastTransitionTime":"2026-02-16T20:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.934406 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.934482 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.934507 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.934538 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:37 crc kubenswrapper[4811]: I0216 20:57:37.934566 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:37Z","lastTransitionTime":"2026-02-16T20:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.037972 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.038039 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.038057 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.038084 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.038104 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:38Z","lastTransitionTime":"2026-02-16T20:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.140752 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.140841 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.140859 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.140898 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.140926 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:38Z","lastTransitionTime":"2026-02-16T20:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.244501 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.244563 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.244586 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.244638 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.244674 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:38Z","lastTransitionTime":"2026-02-16T20:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.347981 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.348046 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.348064 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.348092 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.348116 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:38Z","lastTransitionTime":"2026-02-16T20:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.451285 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.451340 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.451548 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.451575 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.451593 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:38Z","lastTransitionTime":"2026-02-16T20:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.554751 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.554872 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.554898 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.554934 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.554966 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:38Z","lastTransitionTime":"2026-02-16T20:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.658277 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.658349 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.658369 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.658399 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.658420 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:38Z","lastTransitionTime":"2026-02-16T20:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.702875 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.703018 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:38 crc kubenswrapper[4811]: E0216 20:57:38.703164 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:38 crc kubenswrapper[4811]: E0216 20:57:38.703364 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.717311 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 19:43:49.681916923 +0000 UTC Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.760777 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.760838 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.760848 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.760863 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.760872 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:38Z","lastTransitionTime":"2026-02-16T20:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.863375 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.863629 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.863642 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.863673 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.863687 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:38Z","lastTransitionTime":"2026-02-16T20:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.971750 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.971801 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.971809 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.971824 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:38 crc kubenswrapper[4811]: I0216 20:57:38.971833 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:38Z","lastTransitionTime":"2026-02-16T20:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.074735 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.074807 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.074831 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.074855 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.074899 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:39Z","lastTransitionTime":"2026-02-16T20:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.178272 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.178325 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.178343 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.178372 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.178393 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:39Z","lastTransitionTime":"2026-02-16T20:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.281867 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.281943 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.281964 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.281995 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.282016 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:39Z","lastTransitionTime":"2026-02-16T20:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.385935 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.386014 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.386033 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.386064 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.386091 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:39Z","lastTransitionTime":"2026-02-16T20:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.489765 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.489827 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.489843 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.489869 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.489889 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:39Z","lastTransitionTime":"2026-02-16T20:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.593637 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.593726 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.593747 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.593776 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.593796 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:39Z","lastTransitionTime":"2026-02-16T20:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.697552 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.697627 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.697650 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.697688 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.697713 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:39Z","lastTransitionTime":"2026-02-16T20:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.699353 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.699430 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.699454 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.699483 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.699506 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:39Z","lastTransitionTime":"2026-02-16T20:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.702320 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.702322 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:39 crc kubenswrapper[4811]: E0216 20:57:39.702503 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:39 crc kubenswrapper[4811]: E0216 20:57:39.702672 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.718522 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 03:30:24.58270524 +0000 UTC Feb 16 20:57:39 crc kubenswrapper[4811]: E0216 20:57:39.724610 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:39Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.731406 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.731491 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.731515 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.731586 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.731614 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:39Z","lastTransitionTime":"2026-02-16T20:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:39 crc kubenswrapper[4811]: E0216 20:57:39.757311 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:39Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.763251 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.763339 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.763369 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.763403 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.763424 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:39Z","lastTransitionTime":"2026-02-16T20:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:39 crc kubenswrapper[4811]: E0216 20:57:39.787947 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:39Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.794067 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.794245 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.794274 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.794304 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.794326 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:39Z","lastTransitionTime":"2026-02-16T20:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:39 crc kubenswrapper[4811]: E0216 20:57:39.816698 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:39Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.823816 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.823928 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.823956 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.823988 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.824009 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:39Z","lastTransitionTime":"2026-02-16T20:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:39 crc kubenswrapper[4811]: E0216 20:57:39.846952 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:39Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:39 crc kubenswrapper[4811]: E0216 20:57:39.847315 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.850616 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.850706 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.850728 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.850765 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.850791 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:39Z","lastTransitionTime":"2026-02-16T20:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.954610 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.954687 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.954704 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.954734 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:39 crc kubenswrapper[4811]: I0216 20:57:39.954756 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:39Z","lastTransitionTime":"2026-02-16T20:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.059625 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.059733 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.059760 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.059807 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.060501 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:40Z","lastTransitionTime":"2026-02-16T20:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.164248 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.164341 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.164363 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.164395 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.164482 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:40Z","lastTransitionTime":"2026-02-16T20:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.268728 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.268789 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.268809 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.268840 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.268863 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:40Z","lastTransitionTime":"2026-02-16T20:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.372528 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.372611 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.372636 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.372693 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.372720 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:40Z","lastTransitionTime":"2026-02-16T20:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.477407 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.477510 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.477533 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.477558 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.477613 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:40Z","lastTransitionTime":"2026-02-16T20:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.581307 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.581355 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.581364 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.581378 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.581388 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:40Z","lastTransitionTime":"2026-02-16T20:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.684928 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.684974 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.684983 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.685000 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.685015 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:40Z","lastTransitionTime":"2026-02-16T20:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.702534 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:40 crc kubenswrapper[4811]: E0216 20:57:40.702663 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.703131 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:40 crc kubenswrapper[4811]: E0216 20:57:40.703238 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.703836 4811 scope.go:117] "RemoveContainer" containerID="3f065505479c1413b7c53986fb30e2494c3ce0a67232606f938abe117abff4be" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.719492 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 14:23:48.205079788 +0000 UTC Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.789797 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.790285 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.790305 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.790335 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.790362 4811 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:40Z","lastTransitionTime":"2026-02-16T20:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.894124 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.894185 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.894243 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.894279 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.894302 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:40Z","lastTransitionTime":"2026-02-16T20:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.996918 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.996952 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.996961 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.996977 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:40 crc kubenswrapper[4811]: I0216 20:57:40.996988 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:40Z","lastTransitionTime":"2026-02-16T20:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.100125 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.100181 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.100245 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.100290 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.100325 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:41Z","lastTransitionTime":"2026-02-16T20:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.203733 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.203786 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.203805 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.203827 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.203842 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:41Z","lastTransitionTime":"2026-02-16T20:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.222180 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x2ggt_e1bbcd0c-f192-4210-831c-82e87a4768a7/ovnkube-controller/2.log" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.225164 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerStarted","Data":"6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69"} Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.225916 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.247111 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d
34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:41Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.264921 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:41Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.282783 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://276a19c80bef50556fb786571f8b1c5f5d2a798fa193fc5854a3cafa254b32c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:34Z\\\",\\\"message\\\":\\\"2026-02-16T20:56:48+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0c13ebbc-099e-472a-a694-f3e90f379f63\\\\n2026-02-16T20:56:48+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0c13ebbc-099e-472a-a694-f3e90f379f63 to /host/opt/cni/bin/\\\\n2026-02-16T20:56:49Z [verbose] multus-daemon started\\\\n2026-02-16T20:56:49Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T20:57:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:41Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.297630 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91ed3265-a583-4b6c-bb05-52f5b758b44d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b00840bdcc3183dce8bb004f0e2eeb132030cf0895d91bdefa430d0e9593cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12cf9d69d4d523505bd8f6a9183f62a05788b
057ac1667956aa5aba063ee5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l89mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:41Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.306503 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.306590 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.306612 4811 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.306643 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.306665 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:41Z","lastTransitionTime":"2026-02-16T20:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.316188 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf8
6d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:41Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.332407 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:41Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.357628 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f065505479c1413b7c53986fb30e2494c3ce0a67232606f938abe117abff4be\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:11Z\\\",\\\"message\\\":\\\"788 6458 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 20:57:11.920958 6458 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 20:57:11.921134 6458 reflector.go:311] Stopping reflector 
*v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 20:57:11.921642 6458 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 20:57:11.922062 6458 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 20:57:11.922107 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:57:11.922146 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 20:57:11.922167 6458 factory.go:656] Stopping watch factory\\\\nI0216 20:57:11.922215 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 20:57:11.922230 6458 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:57:11.922506 6458 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:41Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.373901 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\
",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:41Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.390170 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7nk7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b4c0a11-23d9-412e-a5d8-120d622bef57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7nk7k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:41Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.409481 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.409539 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.409552 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.409574 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.409588 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:41Z","lastTransitionTime":"2026-02-16T20:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.413649 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3555f53b-f439-4c1b-885e-d0e987a3eacf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25d4bb653feada8d43c9d5c591dc6b998b5832bd3f22e2ec37e5699eccf969d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6ab160c0ebbd5402cb42a4763628
9d18fa0b45751a6a1efe080086f58f11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1a2735a81b0d6b9261f675ec2907fa8ef100dba30e3a1bc9f906236eb376c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:41Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.448636 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:41Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.469402 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:
24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:41Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.488397 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:41Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.501861 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:41Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.512642 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.512687 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.512702 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.512725 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.512742 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:41Z","lastTransitionTime":"2026-02-16T20:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.519581 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:41Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.538759 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:41Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.557164 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:57:41Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.574560 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414a43cbfdf4a40d4397c606cd52588bb0fcd2ae991d0c4a15763c7f15c838e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:41Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.615806 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.616143 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.616281 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.616374 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.616484 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:41Z","lastTransitionTime":"2026-02-16T20:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.702798 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.702851 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:41 crc kubenswrapper[4811]: E0216 20:57:41.702950 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:41 crc kubenswrapper[4811]: E0216 20:57:41.703055 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.718767 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.718848 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.718864 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.718883 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.718895 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:41Z","lastTransitionTime":"2026-02-16T20:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.719927 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 20:45:29.514241566 +0000 UTC Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.821607 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.821643 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.821653 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.821669 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.821679 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:41Z","lastTransitionTime":"2026-02-16T20:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.924150 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.924242 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.924269 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.924298 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:41 crc kubenswrapper[4811]: I0216 20:57:41.924320 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:41Z","lastTransitionTime":"2026-02-16T20:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.027541 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.027620 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.027646 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.027677 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.027698 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:42Z","lastTransitionTime":"2026-02-16T20:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.130538 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.130590 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.130603 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.130623 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.130640 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:42Z","lastTransitionTime":"2026-02-16T20:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.231744 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x2ggt_e1bbcd0c-f192-4210-831c-82e87a4768a7/ovnkube-controller/3.log" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.232742 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.232777 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x2ggt_e1bbcd0c-f192-4210-831c-82e87a4768a7/ovnkube-controller/2.log" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.232815 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.232844 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.232878 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.232903 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:42Z","lastTransitionTime":"2026-02-16T20:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.238294 4811 generic.go:334] "Generic (PLEG): container finished" podID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerID="6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69" exitCode=1 Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.238351 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerDied","Data":"6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69"} Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.238399 4811 scope.go:117] "RemoveContainer" containerID="3f065505479c1413b7c53986fb30e2494c3ce0a67232606f938abe117abff4be" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.239570 4811 scope.go:117] "RemoveContainer" containerID="6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69" Feb 16 20:57:42 crc kubenswrapper[4811]: E0216 20:57:42.239817 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x2ggt_openshift-ovn-kubernetes(e1bbcd0c-f192-4210-831c-82e87a4768a7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.267502 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.283142 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.300926 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414a43cbfdf4a40d4397c606cd52588bb0fcd2ae991d0c4a15763c7f15c838e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.318764 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825
771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25e
c387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.334962 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.336451 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.336505 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.336529 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.336547 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.336557 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:42Z","lastTransitionTime":"2026-02-16T20:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.350915 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://276a19c80bef50556fb786571f8b1c5f5d2a798fa19
3fc5854a3cafa254b32c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:34Z\\\",\\\"message\\\":\\\"2026-02-16T20:56:48+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0c13ebbc-099e-472a-a694-f3e90f379f63\\\\n2026-02-16T20:56:48+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0c13ebbc-099e-472a-a694-f3e90f379f63 to /host/opt/cni/bin/\\\\n2026-02-16T20:56:49Z [verbose] multus-daemon started\\\\n2026-02-16T20:56:49Z [verbose] Readiness Indicator file check\\\\n2026-02-16T20:57:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.365917 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91ed3265-a583-4b6c-bb05-52f5b758b44d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b00840bdcc3183dce8bb004f0e2eeb132030cf0895d91bdefa430d0e9593cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf
09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12cf9d69d4d523505bd8f6a9183f62a05788b057ac1667956aa5aba063ee5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:56Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l89mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.403027 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.430470 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.438741 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.438783 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.438795 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 
20:57:42.438814 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.438828 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:42Z","lastTransitionTime":"2026-02-16T20:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.449219 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f065505479c1413b7c53986fb30e2494c3ce0a67232606f938abe117abff4be\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:11Z\\\",\\\"message\\\":\\\"788 6458 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 20:57:11.920958 6458 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 20:57:11.921134 6458 reflector.go:311] Stopping reflector 
*v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 20:57:11.921642 6458 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 20:57:11.922062 6458 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 20:57:11.922107 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:57:11.922146 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 20:57:11.922167 6458 factory.go:656] Stopping watch factory\\\\nI0216 20:57:11.922215 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 20:57:11.922230 6458 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:57:11.922506 6458 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"message\\\":\\\"map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 20:57:41.659532 6864 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 20:57:41.659218 6864 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr after 0 failed attempt(s)\\\\nF0216 20:57:41.659583 6864 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"na
me\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env
-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 
20:57:42.459708 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.470475 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7nk7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b4c0a11-23d9-412e-a5d8-120d622bef57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7nk7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc 
kubenswrapper[4811]: I0216 20:57:42.481224 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3555f53b-f439-4c1b-885e-d0e987a3eacf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25d4bb653feada8d43c9d5c591dc6b998b5832bd3f22e2ec37e5699eccf969d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6ab160c0ebbd5402cb42a47636289d18fa0b45751a6a1efe080086f58f11a1\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1a2735a81b0d6b9261f675ec2907fa8ef100dba30e3a1bc9f906236eb376c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.502109 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.516358 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:
24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.529729 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.539235 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.540666 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.540700 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.540714 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.540733 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.540745 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:42Z","lastTransitionTime":"2026-02-16T20:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.551015 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.643895 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.643936 4811 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.643946 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.643961 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.643971 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:42Z","lastTransitionTime":"2026-02-16T20:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.702583 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:42 crc kubenswrapper[4811]: E0216 20:57:42.702703 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.703062 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:42 crc kubenswrapper[4811]: E0216 20:57:42.703293 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.720893 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 11:57:17.969445791 +0000 UTC Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.722494 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\"
,\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.734080 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.747073 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.747133 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.747142 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.747161 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.747171 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:42Z","lastTransitionTime":"2026-02-16T20:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.747183 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://276a19c80bef50556fb786571f8b1c5f5d2a798fa19
3fc5854a3cafa254b32c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:34Z\\\",\\\"message\\\":\\\"2026-02-16T20:56:48+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0c13ebbc-099e-472a-a694-f3e90f379f63\\\\n2026-02-16T20:56:48+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0c13ebbc-099e-472a-a694-f3e90f379f63 to /host/opt/cni/bin/\\\\n2026-02-16T20:56:49Z [verbose] multus-daemon started\\\\n2026-02-16T20:56:49Z [verbose] Readiness Indicator file check\\\\n2026-02-16T20:57:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.762257 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91ed3265-a583-4b6c-bb05-52f5b758b44d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b00840bdcc3183dce8bb004f0e2eeb132030cf0895d91bdefa430d0e9593cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf
09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12cf9d69d4d523505bd8f6a9183f62a05788b057ac1667956aa5aba063ee5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:56Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l89mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.775777 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.790721 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.808716 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f065505479c1413b7c53986fb30e2494c3ce0a67232606f938abe117abff4be\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:11Z\\\",\\\"message\\\":\\\"788 6458 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 20:57:11.920958 6458 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 20:57:11.921134 6458 reflector.go:311] Stopping reflector 
*v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 20:57:11.921642 6458 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 20:57:11.922062 6458 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 20:57:11.922107 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:57:11.922146 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 20:57:11.922167 6458 factory.go:656] Stopping watch factory\\\\nI0216 20:57:11.922215 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 20:57:11.922230 6458 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:57:11.922506 6458 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"message\\\":\\\"map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 20:57:41.659532 6864 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 20:57:41.659218 6864 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr after 0 failed attempt(s)\\\\nF0216 20:57:41.659583 6864 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"na
me\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env
-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 
20:57:42.818778 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.830865 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7nk7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b4c0a11-23d9-412e-a5d8-120d622bef57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7nk7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc 
kubenswrapper[4811]: I0216 20:57:42.845291 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3555f53b-f439-4c1b-885e-d0e987a3eacf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25d4bb653feada8d43c9d5c591dc6b998b5832bd3f22e2ec37e5699eccf969d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6ab160c0ebbd5402cb42a47636289d18fa0b45751a6a1efe080086f58f11a1\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1a2735a81b0d6b9261f675ec2907fa8ef100dba30e3a1bc9f906236eb376c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.850303 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.850344 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.850360 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.850378 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.850391 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:42Z","lastTransitionTime":"2026-02-16T20:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.866048 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\
",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76
b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.884729 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.902056 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.916711 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.931160 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.944071 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.952854 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.952885 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.952897 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 
20:57:42.952915 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.952926 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:42Z","lastTransitionTime":"2026-02-16T20:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.956589 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:42 crc kubenswrapper[4811]: I0216 20:57:42.971381 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414a43cbfdf4a40d4397c606cd52588bb0fcd2ae991d0c4a15763c7f15c838e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3
bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:42Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.056522 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.056582 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.056601 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.056632 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.056657 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:43Z","lastTransitionTime":"2026-02-16T20:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.159091 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.159155 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.159173 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.159228 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.159252 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:43Z","lastTransitionTime":"2026-02-16T20:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.243509 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x2ggt_e1bbcd0c-f192-4210-831c-82e87a4768a7/ovnkube-controller/3.log" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.247617 4811 scope.go:117] "RemoveContainer" containerID="6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69" Feb 16 20:57:43 crc kubenswrapper[4811]: E0216 20:57:43.247811 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x2ggt_openshift-ovn-kubernetes(e1bbcd0c-f192-4210-831c-82e87a4768a7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.261844 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.263948 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.264006 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.264026 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.264049 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.264073 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:43Z","lastTransitionTime":"2026-02-16T20:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.275791 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.287867 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3555f53b-f439-4c1b-885e-d0e987a3eacf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25d4bb653feada8d43c9d5c591dc6b998b5832bd3f22e2ec37e5699eccf969d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6ab160c0ebbd5402cb42a47636289d18fa0b45751a6a1efe080086f58f11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1a2735a81b0d6b9261f675ec2907fa8ef100dba30e3a1bc9f906236eb376c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.321548 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.336932 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.350966 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.366700 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.366754 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.366768 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:43 crc 
kubenswrapper[4811]: I0216 20:57:43.366787 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.366803 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:43Z","lastTransitionTime":"2026-02-16T20:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.369273 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 
20:57:43.380989 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.400415 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414a43cbfdf4a40d4397c606cd52588bb0fcd2ae991d0c4a15763c7f15c838e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.418833 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d
34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.430000 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.441021 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://276a19c80bef50556fb786571f8b1c5f5d2a798fa193fc5854a3cafa254b32c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:34Z\\\",\\\"message\\\":\\\"2026-02-16T20:56:48+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0c13ebbc-099e-472a-a694-f3e90f379f63\\\\n2026-02-16T20:56:48+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0c13ebbc-099e-472a-a694-f3e90f379f63 to /host/opt/cni/bin/\\\\n2026-02-16T20:56:49Z [verbose] multus-daemon started\\\\n2026-02-16T20:56:49Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T20:57:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.450261 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91ed3265-a583-4b6c-bb05-52f5b758b44d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b00840bdcc3183dce8bb004f0e2eeb132030cf0895d91bdefa430d0e9593cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12cf9d69d4d523505bd8f6a9183f62a05788b
057ac1667956aa5aba063ee5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l89mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.459285 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7nk7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b4c0a11-23d9-412e-a5d8-120d622bef57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7nk7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc 
kubenswrapper[4811]: I0216 20:57:43.469172 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.469245 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.469259 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.469276 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.469288 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:43Z","lastTransitionTime":"2026-02-16T20:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.472348 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.485951 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.501881 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"message\\\":\\\"map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 20:57:41.659532 6864 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 20:57:41.659218 6864 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr after 0 failed attempt(s)\\\\nF0216 20:57:41.659583 6864 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x2ggt_openshift-ovn-kubernetes(e1bbcd0c-f192-4210-831c-82e87a4768a7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f
2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.510699 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.571900 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.571951 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.571966 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.571987 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.572001 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:43Z","lastTransitionTime":"2026-02-16T20:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.675774 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.675848 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.675871 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.675900 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.675922 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:43Z","lastTransitionTime":"2026-02-16T20:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.702651 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.702650 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:43 crc kubenswrapper[4811]: E0216 20:57:43.702863 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:43 crc kubenswrapper[4811]: E0216 20:57:43.703014 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.722058 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 18:20:15.506291913 +0000 UTC Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.779391 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.779434 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.779449 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.779471 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.779487 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:43Z","lastTransitionTime":"2026-02-16T20:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.882503 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.882553 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.882565 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.882581 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.882594 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:43Z","lastTransitionTime":"2026-02-16T20:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.985636 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.985670 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.985682 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.985698 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:43 crc kubenswrapper[4811]: I0216 20:57:43.985711 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:43Z","lastTransitionTime":"2026-02-16T20:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.089079 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.089159 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.089172 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.089214 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.089228 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:44Z","lastTransitionTime":"2026-02-16T20:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.191860 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.191905 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.191916 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.191934 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.191949 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:44Z","lastTransitionTime":"2026-02-16T20:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.294929 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.294983 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.295002 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.295027 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.295044 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:44Z","lastTransitionTime":"2026-02-16T20:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.398864 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.398936 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.398962 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.398993 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.399014 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:44Z","lastTransitionTime":"2026-02-16T20:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.502692 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.502768 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.502785 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.502811 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.502830 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:44Z","lastTransitionTime":"2026-02-16T20:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.606164 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.606269 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.606295 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.606328 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.606350 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:44Z","lastTransitionTime":"2026-02-16T20:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.702332 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.702343 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:44 crc kubenswrapper[4811]: E0216 20:57:44.702520 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:44 crc kubenswrapper[4811]: E0216 20:57:44.702750 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.709648 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.709697 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.709715 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.709739 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.709758 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:44Z","lastTransitionTime":"2026-02-16T20:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.722334 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 02:59:32.321448679 +0000 UTC Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.812556 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.812597 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.812607 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.812624 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:44 crc kubenswrapper[4811]: I0216 20:57:44.812634 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:44Z","lastTransitionTime":"2026-02-16T20:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:45 crc kubenswrapper[4811]: I0216 20:57:45.611679 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:57:45 crc kubenswrapper[4811]: I0216 20:57:45.611857 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:45 crc kubenswrapper[4811]: I0216 20:57:45.611912 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:45 crc kubenswrapper[4811]: I0216 20:57:45.611974 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:45 crc kubenswrapper[4811]: E0216 20:57:45.612137 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:57:45 crc 
kubenswrapper[4811]: E0216 20:57:45.612165 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:57:45 crc kubenswrapper[4811]: E0216 20:57:45.612180 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:57:45 crc kubenswrapper[4811]: E0216 20:57:45.612331 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 20:58:49.612314329 +0000 UTC m=+147.541610277 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:57:45 crc kubenswrapper[4811]: E0216 20:57:45.612378 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:57:45 crc kubenswrapper[4811]: E0216 20:57:45.612426 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-16 20:58:49.612416141 +0000 UTC m=+147.541712079 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:57:45 crc kubenswrapper[4811]: E0216 20:57:45.612504 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:57:45 crc kubenswrapper[4811]: E0216 20:57:45.612607 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:49.612565545 +0000 UTC m=+147.541861523 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:57:45 crc kubenswrapper[4811]: E0216 20:57:45.612660 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:58:49.612635967 +0000 UTC m=+147.541932195 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:57:45 crc kubenswrapper[4811]: I0216 20:57:45.635920 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:45 crc kubenswrapper[4811]: I0216 20:57:45.635980 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:45 crc kubenswrapper[4811]: I0216 20:57:45.635996 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:45 crc kubenswrapper[4811]: I0216 20:57:45.636016 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:45 crc kubenswrapper[4811]: I0216 20:57:45.636033 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:45Z","lastTransitionTime":"2026-02-16T20:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:45 crc kubenswrapper[4811]: I0216 20:57:45.702664 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:45 crc kubenswrapper[4811]: I0216 20:57:45.702664 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:45 crc kubenswrapper[4811]: E0216 20:57:45.702972 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:45 crc kubenswrapper[4811]: E0216 20:57:45.703071 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:45 crc kubenswrapper[4811]: I0216 20:57:45.713330 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:45 crc kubenswrapper[4811]: E0216 20:57:45.713498 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:57:45 crc kubenswrapper[4811]: E0216 20:57:45.713519 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:57:45 crc 
kubenswrapper[4811]: E0216 20:57:45.713531 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:57:45 crc kubenswrapper[4811]: E0216 20:57:45.713580 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 20:58:49.713566884 +0000 UTC m=+147.642862822 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:57:45 crc kubenswrapper[4811]: I0216 20:57:45.723259 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 05:06:46.795036285 +0000 UTC Feb 16 20:57:45 crc kubenswrapper[4811]: I0216 20:57:45.741263 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:45 crc kubenswrapper[4811]: I0216 20:57:45.741322 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:45 crc kubenswrapper[4811]: I0216 20:57:45.741341 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:45 crc kubenswrapper[4811]: I0216 20:57:45.741367 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:45 crc kubenswrapper[4811]: I0216 20:57:45.741385 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:45Z","lastTransitionTime":"2026-02-16T20:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:45 crc kubenswrapper[4811]: I0216 20:57:45.844533 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:45 crc kubenswrapper[4811]: I0216 20:57:45.844613 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:45 crc kubenswrapper[4811]: I0216 20:57:45.844638 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:45 crc kubenswrapper[4811]: I0216 20:57:45.844670 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:45 crc kubenswrapper[4811]: I0216 20:57:45.844695 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:45Z","lastTransitionTime":"2026-02-16T20:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:46 crc kubenswrapper[4811]: I0216 20:57:46.677406 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:46 crc kubenswrapper[4811]: I0216 20:57:46.677498 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:46 crc kubenswrapper[4811]: I0216 20:57:46.677518 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:46 crc kubenswrapper[4811]: I0216 20:57:46.677546 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:46 crc kubenswrapper[4811]: I0216 20:57:46.677566 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:46Z","lastTransitionTime":"2026-02-16T20:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:46 crc kubenswrapper[4811]: I0216 20:57:46.702175 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:46 crc kubenswrapper[4811]: I0216 20:57:46.702360 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:46 crc kubenswrapper[4811]: E0216 20:57:46.702524 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:46 crc kubenswrapper[4811]: E0216 20:57:46.702810 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:46 crc kubenswrapper[4811]: I0216 20:57:46.723757 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 00:23:08.871011729 +0000 UTC Feb 16 20:57:46 crc kubenswrapper[4811]: I0216 20:57:46.781082 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:46 crc kubenswrapper[4811]: I0216 20:57:46.781137 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:46 crc kubenswrapper[4811]: I0216 20:57:46.781156 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:46 crc kubenswrapper[4811]: I0216 20:57:46.781180 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:46 crc kubenswrapper[4811]: I0216 20:57:46.781235 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:46Z","lastTransitionTime":"2026-02-16T20:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:46 crc kubenswrapper[4811]: I0216 20:57:46.884109 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:46 crc kubenswrapper[4811]: I0216 20:57:46.884163 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:46 crc kubenswrapper[4811]: I0216 20:57:46.884180 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:46 crc kubenswrapper[4811]: I0216 20:57:46.884249 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:46 crc kubenswrapper[4811]: I0216 20:57:46.884275 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:46Z","lastTransitionTime":"2026-02-16T20:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:46 crc kubenswrapper[4811]: I0216 20:57:46.987534 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:46 crc kubenswrapper[4811]: I0216 20:57:46.987600 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:46 crc kubenswrapper[4811]: I0216 20:57:46.987634 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:46 crc kubenswrapper[4811]: I0216 20:57:46.987667 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:46 crc kubenswrapper[4811]: I0216 20:57:46.987688 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:46Z","lastTransitionTime":"2026-02-16T20:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.090693 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.090731 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.090746 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.090763 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.090775 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:47Z","lastTransitionTime":"2026-02-16T20:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.194184 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.194287 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.194306 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.194331 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.194350 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:47Z","lastTransitionTime":"2026-02-16T20:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.297190 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.297310 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.297330 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.297355 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.297374 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:47Z","lastTransitionTime":"2026-02-16T20:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.401336 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.401489 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.401505 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.401606 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.401641 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:47Z","lastTransitionTime":"2026-02-16T20:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.504758 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.505365 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.505399 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.505431 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.505454 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:47Z","lastTransitionTime":"2026-02-16T20:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.607747 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.607808 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.607873 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.607895 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.607907 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:47Z","lastTransitionTime":"2026-02-16T20:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.702248 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.702252 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:47 crc kubenswrapper[4811]: E0216 20:57:47.702442 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:47 crc kubenswrapper[4811]: E0216 20:57:47.702554 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.710945 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.711015 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.711033 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.711059 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.711077 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:47Z","lastTransitionTime":"2026-02-16T20:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.724359 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 21:04:56.176405603 +0000 UTC Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.814155 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.814260 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.814285 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.814314 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.814336 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:47Z","lastTransitionTime":"2026-02-16T20:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.917865 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.917919 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.917936 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.917961 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:47 crc kubenswrapper[4811]: I0216 20:57:47.917980 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:47Z","lastTransitionTime":"2026-02-16T20:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.020390 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.020420 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.020433 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.020449 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.020461 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:48Z","lastTransitionTime":"2026-02-16T20:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.123645 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.123701 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.123713 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.123732 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.123748 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:48Z","lastTransitionTime":"2026-02-16T20:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.227292 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.227354 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.227389 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.227418 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.227442 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:48Z","lastTransitionTime":"2026-02-16T20:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.330282 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.330351 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.330374 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.330406 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.330429 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:48Z","lastTransitionTime":"2026-02-16T20:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.433729 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.433779 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.433801 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.433827 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.433847 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:48Z","lastTransitionTime":"2026-02-16T20:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.537361 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.537435 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.537460 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.537489 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.537510 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:48Z","lastTransitionTime":"2026-02-16T20:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.640360 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.640449 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.640713 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.640813 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.640843 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:48Z","lastTransitionTime":"2026-02-16T20:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.702055 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.702281 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:48 crc kubenswrapper[4811]: E0216 20:57:48.702577 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:48 crc kubenswrapper[4811]: E0216 20:57:48.702817 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.718284 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.725441 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 13:56:41.785720821 +0000 UTC Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.744529 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.744587 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.744604 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.744629 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.744646 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:48Z","lastTransitionTime":"2026-02-16T20:57:48Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.847464 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.847513 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.847523 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.847539 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.847549 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:48Z","lastTransitionTime":"2026-02-16T20:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.950333 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.950399 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.950421 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.950453 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:48 crc kubenswrapper[4811]: I0216 20:57:48.950476 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:48Z","lastTransitionTime":"2026-02-16T20:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.054985 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.055076 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.055096 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.055125 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.055152 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:49Z","lastTransitionTime":"2026-02-16T20:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.158685 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.158755 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.158774 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.158798 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.158817 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:49Z","lastTransitionTime":"2026-02-16T20:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.262221 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.262269 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.262279 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.262298 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.262310 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:49Z","lastTransitionTime":"2026-02-16T20:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.366356 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.366445 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.366483 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.366513 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.366534 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:49Z","lastTransitionTime":"2026-02-16T20:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.470001 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.470056 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.470073 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.470099 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.470116 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:49Z","lastTransitionTime":"2026-02-16T20:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.573022 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.573085 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.573102 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.573126 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.573146 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:49Z","lastTransitionTime":"2026-02-16T20:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.675910 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.675978 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.675996 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.676022 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.676041 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:49Z","lastTransitionTime":"2026-02-16T20:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.702174 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.702282 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:49 crc kubenswrapper[4811]: E0216 20:57:49.702332 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:49 crc kubenswrapper[4811]: E0216 20:57:49.702450 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.725656 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 20:03:46.086544653 +0000 UTC Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.778925 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.778984 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.778997 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.779021 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.779035 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:49Z","lastTransitionTime":"2026-02-16T20:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.882834 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.882931 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.882960 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.883046 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.883080 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:49Z","lastTransitionTime":"2026-02-16T20:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.986308 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.986366 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.986380 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.986407 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:49 crc kubenswrapper[4811]: I0216 20:57:49.986425 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:49Z","lastTransitionTime":"2026-02-16T20:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.089809 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.089877 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.089890 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.089911 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.089926 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:50Z","lastTransitionTime":"2026-02-16T20:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.107799 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.107842 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.107856 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.107872 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.107885 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:50Z","lastTransitionTime":"2026-02-16T20:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:50 crc kubenswrapper[4811]: E0216 20:57:50.128734 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.133845 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.133910 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.133926 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.133948 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.133964 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:50Z","lastTransitionTime":"2026-02-16T20:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:50 crc kubenswrapper[4811]: E0216 20:57:50.151751 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.156453 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.156507 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.156519 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.156541 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.156556 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:50Z","lastTransitionTime":"2026-02-16T20:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:50 crc kubenswrapper[4811]: E0216 20:57:50.173787 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.179012 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.179094 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.179179 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.179644 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.179684 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:50Z","lastTransitionTime":"2026-02-16T20:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:50 crc kubenswrapper[4811]: E0216 20:57:50.201867 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.207560 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.207605 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.207616 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.207635 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.207649 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:50Z","lastTransitionTime":"2026-02-16T20:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:50 crc kubenswrapper[4811]: E0216 20:57:50.229486 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:50 crc kubenswrapper[4811]: E0216 20:57:50.229745 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.232290 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.232352 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.232374 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.232403 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.232423 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:50Z","lastTransitionTime":"2026-02-16T20:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.335894 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.335967 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.335991 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.336028 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.336053 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:50Z","lastTransitionTime":"2026-02-16T20:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.440322 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.440399 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.440486 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.440527 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.440551 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:50Z","lastTransitionTime":"2026-02-16T20:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.544552 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.544636 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.544650 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.544672 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.544686 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:50Z","lastTransitionTime":"2026-02-16T20:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.647663 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.647743 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.647757 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.647781 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.647799 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:50Z","lastTransitionTime":"2026-02-16T20:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.702566 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.702701 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:50 crc kubenswrapper[4811]: E0216 20:57:50.702734 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:50 crc kubenswrapper[4811]: E0216 20:57:50.702912 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.726733 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 20:37:11.413240428 +0000 UTC Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.750656 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.750737 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.750757 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.750783 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.750803 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:50Z","lastTransitionTime":"2026-02-16T20:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.854717 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.854774 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.854792 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.854817 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.854836 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:50Z","lastTransitionTime":"2026-02-16T20:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.958754 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.958812 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.958832 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.958861 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:50 crc kubenswrapper[4811]: I0216 20:57:50.958880 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:50Z","lastTransitionTime":"2026-02-16T20:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.062473 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.062512 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.062524 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.062541 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.062552 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:51Z","lastTransitionTime":"2026-02-16T20:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.165125 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.165170 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.165180 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.165336 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.165380 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:51Z","lastTransitionTime":"2026-02-16T20:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.267948 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.267988 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.267999 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.268014 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.268024 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:51Z","lastTransitionTime":"2026-02-16T20:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.371112 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.371258 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.371287 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.371319 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.371344 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:51Z","lastTransitionTime":"2026-02-16T20:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.474454 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.474524 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.474545 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.474578 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.474602 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:51Z","lastTransitionTime":"2026-02-16T20:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.579501 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.579674 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.579705 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.579741 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.579821 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:51Z","lastTransitionTime":"2026-02-16T20:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.683291 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.683349 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.683362 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.683385 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.683405 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:51Z","lastTransitionTime":"2026-02-16T20:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.702725 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.702802 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:51 crc kubenswrapper[4811]: E0216 20:57:51.702996 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:51 crc kubenswrapper[4811]: E0216 20:57:51.703144 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.727399 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 00:09:37.763509063 +0000 UTC Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.787361 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.787446 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.787469 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.787506 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.787533 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:51Z","lastTransitionTime":"2026-02-16T20:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.891056 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.891152 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.891187 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.891267 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.891294 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:51Z","lastTransitionTime":"2026-02-16T20:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.994248 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.994332 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.994357 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.994393 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:51 crc kubenswrapper[4811]: I0216 20:57:51.994421 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:51Z","lastTransitionTime":"2026-02-16T20:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.098042 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.098125 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.098147 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.098176 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.098238 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:52Z","lastTransitionTime":"2026-02-16T20:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.201422 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.201482 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.201496 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.201519 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.201537 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:52Z","lastTransitionTime":"2026-02-16T20:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.305630 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.305685 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.305700 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.305724 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.305742 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:52Z","lastTransitionTime":"2026-02-16T20:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.408978 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.409033 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.409050 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.409071 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.409084 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:52Z","lastTransitionTime":"2026-02-16T20:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.512072 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.512184 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.512243 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.512271 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.512302 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:52Z","lastTransitionTime":"2026-02-16T20:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.615331 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.615377 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.615389 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.615407 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.615420 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:52Z","lastTransitionTime":"2026-02-16T20:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.702889 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:52 crc kubenswrapper[4811]: E0216 20:57:52.703504 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.702878 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:52 crc kubenswrapper[4811]: E0216 20:57:52.704530 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.719521 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.719575 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.719598 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.719622 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.719638 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:52Z","lastTransitionTime":"2026-02-16T20:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.722184 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7882d54c-73b3-4b59-b98c-dafea45a2600\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b94a3750a50b4ec77d812e54702f5419af37a45dc21a30eaf918dbe789da0651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29c0f57d191bf3e315467166fa2ad14c9add128291cc79cdd05c0c2f40c9f167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c0f57d191bf3e315467166fa2ad14c9add128291cc79cdd05c0c2f40c9f167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.727897 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 19:51:04.228346697 +0000 UTC Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.739855 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.758792 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:57:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.776893 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414a43cbfdf4a40d4397c606cd52588bb0fcd2ae991d0c4a15763c7f15c838e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.799792 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825
771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\",\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25e
c387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.820780 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.822672 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.822725 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.822745 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.822772 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.822794 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:52Z","lastTransitionTime":"2026-02-16T20:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.853695 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://276a19c80bef50556fb786571f8b1c5f5d2a798fa19
3fc5854a3cafa254b32c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:34Z\\\",\\\"message\\\":\\\"2026-02-16T20:56:48+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0c13ebbc-099e-472a-a694-f3e90f379f63\\\\n2026-02-16T20:56:48+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0c13ebbc-099e-472a-a694-f3e90f379f63 to /host/opt/cni/bin/\\\\n2026-02-16T20:56:49Z [verbose] multus-daemon started\\\\n2026-02-16T20:56:49Z [verbose] Readiness Indicator file check\\\\n2026-02-16T20:57:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.876318 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91ed3265-a583-4b6c-bb05-52f5b758b44d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b00840bdcc3183dce8bb004f0e2eeb132030cf0895d91bdefa430d0e9593cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf
09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12cf9d69d4d523505bd8f6a9183f62a05788b057ac1667956aa5aba063ee5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:56Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l89mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.900116 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.922069 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.930862 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.930934 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.930955 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 
20:57:52.930985 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.931007 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:52Z","lastTransitionTime":"2026-02-16T20:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.948751 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"message\\\":\\\"map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 20:57:41.659532 6864 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 20:57:41.659218 6864 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr after 0 failed attempt(s)\\\\nF0216 20:57:41.659583 6864 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x2ggt_openshift-ovn-kubernetes(e1bbcd0c-f192-4210-831c-82e87a4768a7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f
2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.961404 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:52 crc kubenswrapper[4811]: I0216 20:57:52.973673 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7nk7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b4c0a11-23d9-412e-a5d8-120d622bef57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7nk7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:52 crc 
kubenswrapper[4811]: I0216 20:57:52.990360 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3555f53b-f439-4c1b-885e-d0e987a3eacf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25d4bb653feada8d43c9d5c591dc6b998b5832bd3f22e2ec37e5699eccf969d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6ab160c0ebbd5402cb42a47636289d18fa0b45751a6a1efe080086f58f11a1\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1a2735a81b0d6b9261f675ec2907fa8ef100dba30e3a1bc9f906236eb376c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.015714 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.033782 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:
24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.034847 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.034908 
4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.034920 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.034945 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.034959 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:53Z","lastTransitionTime":"2026-02-16T20:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.051529 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.068262 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.083966 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.138924 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.139007 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.139030 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.139061 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.139083 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:53Z","lastTransitionTime":"2026-02-16T20:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.243362 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.243463 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.243490 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.243528 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.243563 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:53Z","lastTransitionTime":"2026-02-16T20:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.347149 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.347233 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.347247 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.347274 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.347291 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:53Z","lastTransitionTime":"2026-02-16T20:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.451158 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.451293 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.451315 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.451342 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.451361 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:53Z","lastTransitionTime":"2026-02-16T20:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.554954 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.555038 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.555056 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.555085 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.555105 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:53Z","lastTransitionTime":"2026-02-16T20:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.658543 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.658611 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.658637 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.658675 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.658699 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:53Z","lastTransitionTime":"2026-02-16T20:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.702657 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.702683 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:53 crc kubenswrapper[4811]: E0216 20:57:53.702937 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:53 crc kubenswrapper[4811]: E0216 20:57:53.703080 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.729035 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 19:34:37.470240311 +0000 UTC Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.761907 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.761979 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.761997 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.762030 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.762051 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:53Z","lastTransitionTime":"2026-02-16T20:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.866080 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.866164 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.866181 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.866244 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.866264 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:53Z","lastTransitionTime":"2026-02-16T20:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.970190 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.970326 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.970351 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.970392 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:53 crc kubenswrapper[4811]: I0216 20:57:53.970417 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:53Z","lastTransitionTime":"2026-02-16T20:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.077845 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.078386 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.078586 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.078849 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.079286 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:54Z","lastTransitionTime":"2026-02-16T20:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.182383 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.182454 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.182469 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.182497 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.182518 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:54Z","lastTransitionTime":"2026-02-16T20:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.286351 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.286421 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.286439 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.286465 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.286483 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:54Z","lastTransitionTime":"2026-02-16T20:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.390613 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.390685 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.390703 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.390729 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.390750 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:54Z","lastTransitionTime":"2026-02-16T20:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.494261 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.494326 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.494346 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.494375 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.494393 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:54Z","lastTransitionTime":"2026-02-16T20:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.596953 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.597035 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.597054 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.597159 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.597181 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:54Z","lastTransitionTime":"2026-02-16T20:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.700885 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.700952 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.700970 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.701000 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.701019 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:54Z","lastTransitionTime":"2026-02-16T20:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.701922 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.701936 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:54 crc kubenswrapper[4811]: E0216 20:57:54.702109 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:54 crc kubenswrapper[4811]: E0216 20:57:54.702276 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.729834 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 04:05:22.276235481 +0000 UTC Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.803984 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.804036 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.804186 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.804253 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.804280 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:54Z","lastTransitionTime":"2026-02-16T20:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.908089 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.908160 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.908180 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.908324 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:54 crc kubenswrapper[4811]: I0216 20:57:54.908360 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:54Z","lastTransitionTime":"2026-02-16T20:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.011276 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.011344 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.011366 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.011393 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.011411 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:55Z","lastTransitionTime":"2026-02-16T20:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.115487 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.115584 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.115609 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.115646 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.115665 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:55Z","lastTransitionTime":"2026-02-16T20:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.219338 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.219429 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.219452 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.219486 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.219508 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:55Z","lastTransitionTime":"2026-02-16T20:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.322065 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.322133 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.322154 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.322177 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.322225 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:55Z","lastTransitionTime":"2026-02-16T20:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.425110 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.425191 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.425264 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.425298 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.425323 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:55Z","lastTransitionTime":"2026-02-16T20:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.529437 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.529533 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.529561 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.529599 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.529626 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:55Z","lastTransitionTime":"2026-02-16T20:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.632895 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.632965 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.632982 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.633010 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.633029 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:55Z","lastTransitionTime":"2026-02-16T20:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.702290 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.702353 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:55 crc kubenswrapper[4811]: E0216 20:57:55.702512 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:55 crc kubenswrapper[4811]: E0216 20:57:55.702701 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.730905 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 15:45:22.17061605 +0000 UTC Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.736934 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.736998 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.737024 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.737055 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.737077 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:55Z","lastTransitionTime":"2026-02-16T20:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.841016 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.841091 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.841111 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.841145 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.841163 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:55Z","lastTransitionTime":"2026-02-16T20:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.944137 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.944252 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.944293 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.944325 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:55 crc kubenswrapper[4811]: I0216 20:57:55.944348 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:55Z","lastTransitionTime":"2026-02-16T20:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.047323 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.047395 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.047414 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.047440 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.047460 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:56Z","lastTransitionTime":"2026-02-16T20:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.151314 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.151384 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.151403 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.151435 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.151456 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:56Z","lastTransitionTime":"2026-02-16T20:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.255175 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.255320 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.255349 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.255390 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.255415 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:56Z","lastTransitionTime":"2026-02-16T20:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.358820 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.358896 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.358913 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.358955 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.358976 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:56Z","lastTransitionTime":"2026-02-16T20:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.462561 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.462639 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.462664 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.462700 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.462726 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:56Z","lastTransitionTime":"2026-02-16T20:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.566937 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.567021 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.567040 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.567068 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.567089 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:56Z","lastTransitionTime":"2026-02-16T20:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.670415 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.670488 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.670513 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.670550 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.670576 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:56Z","lastTransitionTime":"2026-02-16T20:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.702922 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:56 crc kubenswrapper[4811]: E0216 20:57:56.703450 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.703846 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:56 crc kubenswrapper[4811]: E0216 20:57:56.703999 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.732052 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 17:11:36.611652611 +0000 UTC Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.774890 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.774967 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.774983 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.775005 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.775019 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:56Z","lastTransitionTime":"2026-02-16T20:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.878489 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.878937 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.879085 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.879555 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.879719 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:56Z","lastTransitionTime":"2026-02-16T20:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.982544 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.982599 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.982622 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.982654 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:56 crc kubenswrapper[4811]: I0216 20:57:56.982671 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:56Z","lastTransitionTime":"2026-02-16T20:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.086118 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.086189 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.086241 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.086277 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.086302 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:57Z","lastTransitionTime":"2026-02-16T20:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.189044 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.189110 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.189137 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.189165 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.189185 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:57Z","lastTransitionTime":"2026-02-16T20:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.291511 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.291600 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.291624 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.291654 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.291677 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:57Z","lastTransitionTime":"2026-02-16T20:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.394846 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.394908 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.394928 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.394956 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.394975 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:57Z","lastTransitionTime":"2026-02-16T20:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.498603 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.498685 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.498710 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.498740 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.498764 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:57Z","lastTransitionTime":"2026-02-16T20:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.601881 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.601942 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.601965 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.601995 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.602014 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:57Z","lastTransitionTime":"2026-02-16T20:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.701912 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.701986 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:57 crc kubenswrapper[4811]: E0216 20:57:57.702114 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:57 crc kubenswrapper[4811]: E0216 20:57:57.702314 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.704296 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.704350 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.704370 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.704391 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.704408 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:57Z","lastTransitionTime":"2026-02-16T20:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.732410 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 23:23:35.323321502 +0000 UTC Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.806898 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.807002 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.807022 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.807053 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.807079 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:57Z","lastTransitionTime":"2026-02-16T20:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.909656 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.909727 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.909746 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.909773 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:57 crc kubenswrapper[4811]: I0216 20:57:57.909790 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:57Z","lastTransitionTime":"2026-02-16T20:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.013050 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.013108 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.013116 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.013181 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.013211 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:58Z","lastTransitionTime":"2026-02-16T20:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.116399 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.116441 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.116453 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.116472 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.116482 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:58Z","lastTransitionTime":"2026-02-16T20:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.219718 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.219802 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.219822 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.219851 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.219873 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:58Z","lastTransitionTime":"2026-02-16T20:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.322458 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.322546 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.322563 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.322596 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.322616 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:58Z","lastTransitionTime":"2026-02-16T20:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.426149 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.426217 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.426228 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.426250 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.426265 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:58Z","lastTransitionTime":"2026-02-16T20:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.529764 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.529830 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.529849 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.529876 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.529899 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:58Z","lastTransitionTime":"2026-02-16T20:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.633428 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.633514 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.633532 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.633565 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.633586 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:58Z","lastTransitionTime":"2026-02-16T20:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.703057 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:58 crc kubenswrapper[4811]: E0216 20:57:58.703413 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.703448 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:58 crc kubenswrapper[4811]: E0216 20:57:58.704329 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.704919 4811 scope.go:117] "RemoveContainer" containerID="6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69" Feb 16 20:57:58 crc kubenswrapper[4811]: E0216 20:57:58.705402 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x2ggt_openshift-ovn-kubernetes(e1bbcd0c-f192-4210-831c-82e87a4768a7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.732675 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 23:46:55.097321974 +0000 UTC Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.736143 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.736204 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.736215 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.736233 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.736247 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:58Z","lastTransitionTime":"2026-02-16T20:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.840030 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.840109 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.840129 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.840161 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.840180 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:58Z","lastTransitionTime":"2026-02-16T20:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.944288 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.944375 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.944399 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.944432 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:58 crc kubenswrapper[4811]: I0216 20:57:58.944451 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:58Z","lastTransitionTime":"2026-02-16T20:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.048806 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.048890 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.048908 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.048944 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.048966 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:59Z","lastTransitionTime":"2026-02-16T20:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.152543 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.152602 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.152621 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.152648 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.152667 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:59Z","lastTransitionTime":"2026-02-16T20:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.256958 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.257044 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.257070 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.257103 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.257123 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:59Z","lastTransitionTime":"2026-02-16T20:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.361043 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.361118 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.361142 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.361172 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.361190 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:59Z","lastTransitionTime":"2026-02-16T20:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.465565 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.465635 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.465654 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.465682 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.465701 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:59Z","lastTransitionTime":"2026-02-16T20:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.568609 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.568672 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.568685 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.568709 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.568726 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:59Z","lastTransitionTime":"2026-02-16T20:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.673300 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.673374 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.673397 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.673430 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.673453 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:59Z","lastTransitionTime":"2026-02-16T20:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.702181 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.702324 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:59 crc kubenswrapper[4811]: E0216 20:57:59.702435 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:57:59 crc kubenswrapper[4811]: E0216 20:57:59.702577 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.732854 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 03:16:27.134276025 +0000 UTC Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.777246 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.777319 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.777337 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.777365 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.777384 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:59Z","lastTransitionTime":"2026-02-16T20:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.882820 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.882897 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.882922 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.882953 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.882976 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:59Z","lastTransitionTime":"2026-02-16T20:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.986876 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.986966 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.986987 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.987016 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:59 crc kubenswrapper[4811]: I0216 20:57:59.987036 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:59Z","lastTransitionTime":"2026-02-16T20:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.090742 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.090820 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.090840 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.090870 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.090898 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:00Z","lastTransitionTime":"2026-02-16T20:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.194392 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.194474 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.194496 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.194528 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.194548 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:00Z","lastTransitionTime":"2026-02-16T20:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.297813 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.297897 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.297912 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.297935 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.297950 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:00Z","lastTransitionTime":"2026-02-16T20:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.401476 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.401533 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.401547 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.401569 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.401582 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:00Z","lastTransitionTime":"2026-02-16T20:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.487776 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.487871 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.487897 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.487924 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.487945 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:00Z","lastTransitionTime":"2026-02-16T20:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:00 crc kubenswrapper[4811]: E0216 20:58:00.509710 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:00Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.515803 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.515864 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.515891 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.515925 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.515949 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:00Z","lastTransitionTime":"2026-02-16T20:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:00 crc kubenswrapper[4811]: E0216 20:58:00.531937 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:00Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.537141 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.537234 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.537248 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.537272 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.537286 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:00Z","lastTransitionTime":"2026-02-16T20:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:00 crc kubenswrapper[4811]: E0216 20:58:00.554926 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:00Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.561841 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.561931 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.561945 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.561972 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.561989 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:00Z","lastTransitionTime":"2026-02-16T20:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:00 crc kubenswrapper[4811]: E0216 20:58:00.587633 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:00Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.593248 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.593330 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.593351 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.593381 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.593399 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:00Z","lastTransitionTime":"2026-02-16T20:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:00 crc kubenswrapper[4811]: E0216 20:58:00.613918 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f61b05-d276-4909-a6aa-85b13eb068a7\\\",\\\"systemUUID\\\":\\\"529dfd1c-acac-4f44-8431-0dae7052f19c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:00Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:00 crc kubenswrapper[4811]: E0216 20:58:00.614142 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.616760 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.616816 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.616838 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.616863 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.616879 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:00Z","lastTransitionTime":"2026-02-16T20:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.702607 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:00 crc kubenswrapper[4811]: E0216 20:58:00.702851 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.703246 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:00 crc kubenswrapper[4811]: E0216 20:58:00.703517 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.719480 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.719548 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.719572 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.719602 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.719626 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:00Z","lastTransitionTime":"2026-02-16T20:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.733847 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 19:20:03.249818325 +0000 UTC Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.823910 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.824425 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.824635 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.824846 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.825065 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:00Z","lastTransitionTime":"2026-02-16T20:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.928314 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.928394 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.928419 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.928450 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:00 crc kubenswrapper[4811]: I0216 20:58:00.928475 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:00Z","lastTransitionTime":"2026-02-16T20:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.032633 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.032700 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.032719 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.032745 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.032763 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:01Z","lastTransitionTime":"2026-02-16T20:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.136310 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.136396 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.136415 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.136444 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.136462 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:01Z","lastTransitionTime":"2026-02-16T20:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.240253 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.240306 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.240318 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.240340 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.240353 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:01Z","lastTransitionTime":"2026-02-16T20:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.343899 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.344079 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.344106 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.344190 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.344244 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:01Z","lastTransitionTime":"2026-02-16T20:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.448265 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.448340 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.448360 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.448391 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.448411 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:01Z","lastTransitionTime":"2026-02-16T20:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.552635 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.552734 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.552756 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.552785 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.552805 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:01Z","lastTransitionTime":"2026-02-16T20:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.656977 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.657052 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.657071 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.657098 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.657123 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:01Z","lastTransitionTime":"2026-02-16T20:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.702551 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.702551 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:01 crc kubenswrapper[4811]: E0216 20:58:01.702757 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:58:01 crc kubenswrapper[4811]: E0216 20:58:01.702859 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.734859 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 01:40:33.082701363 +0000 UTC Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.761158 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.761259 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.761280 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.761307 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.761327 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:01Z","lastTransitionTime":"2026-02-16T20:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.864540 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.864619 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.864644 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.864676 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.864702 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:01Z","lastTransitionTime":"2026-02-16T20:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.968052 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.968120 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.968138 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.968165 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:01 crc kubenswrapper[4811]: I0216 20:58:01.968182 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:01Z","lastTransitionTime":"2026-02-16T20:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.004709 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs\") pod \"network-metrics-daemon-7nk7k\" (UID: \"1b4c0a11-23d9-412e-a5d8-120d622bef57\") " pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:58:02 crc kubenswrapper[4811]: E0216 20:58:02.005091 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:58:02 crc kubenswrapper[4811]: E0216 20:58:02.005249 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs podName:1b4c0a11-23d9-412e-a5d8-120d622bef57 nodeName:}" failed. No retries permitted until 2026-02-16 20:59:06.005181292 +0000 UTC m=+163.934477270 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs") pod "network-metrics-daemon-7nk7k" (UID: "1b4c0a11-23d9-412e-a5d8-120d622bef57") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.071746 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.071825 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.071844 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.071874 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.071895 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:02Z","lastTransitionTime":"2026-02-16T20:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.176416 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.176506 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.176526 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.176572 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.176595 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:02Z","lastTransitionTime":"2026-02-16T20:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.287453 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.287522 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.287541 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.287571 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.287594 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:02Z","lastTransitionTime":"2026-02-16T20:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.391475 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.391533 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.391551 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.391579 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.391598 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:02Z","lastTransitionTime":"2026-02-16T20:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.494024 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.494078 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.494100 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.494124 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.494147 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:02Z","lastTransitionTime":"2026-02-16T20:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.598009 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.598073 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.598085 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.598120 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.598138 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:02Z","lastTransitionTime":"2026-02-16T20:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.700386 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.700457 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.700476 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.700504 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.700524 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:02Z","lastTransitionTime":"2026-02-16T20:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.702803 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.702816 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:02 crc kubenswrapper[4811]: E0216 20:58:02.702989 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:02 crc kubenswrapper[4811]: E0216 20:58:02.703155 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.726698 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://116b2bf9f228b1ea8c324a10a8499f547a9eefaf243e92677d6ad1cdad41fb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.739589 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 22:49:33.632875423 +0000 UTC Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.746166 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.780751 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1bbcd0c-f192-4210-831c-82e87a4768a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"message\\\":\\\"map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 20:57:41.659532 6864 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 20:57:41.659218 6864 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr after 0 failed attempt(s)\\\\nF0216 20:57:41.659583 6864 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x2ggt_openshift-ovn-kubernetes(e1bbcd0c-f192-4210-831c-82e87a4768a7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29fc0a4a4cdb1c633f
2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x2ggt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.800688 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xwj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a462664-d492-4632-bd4d-e1a890961995\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843bd93a94215472362338b9f6cf80f1700251dc476805feb2efcf03148c2c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnsms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xwj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.804158 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.804444 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.804587 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.804731 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.804857 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:02Z","lastTransitionTime":"2026-02-16T20:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.821842 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7nk7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b4c0a11-23d9-412e-a5d8-120d622bef57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hffc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7nk7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:02 crc 
kubenswrapper[4811]: I0216 20:58:02.847287 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3555f53b-f439-4c1b-885e-d0e987a3eacf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25d4bb653feada8d43c9d5c591dc6b998b5832bd3f22e2ec37e5699eccf969d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6ab160c0ebbd5402cb42a47636289d18fa0b45751a6a1efe080086f58f11a1\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1a2735a81b0d6b9261f675ec2907fa8ef100dba30e3a1bc9f906236eb376c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2578ea2fcc01f0f0c16c5f4b8ad8f7806c3ad9d7958463b8cb38e7edbe49684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.880670 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2d1307d-4b4c-40b8-b1bb-5d5313535301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c690d84ba176d779bbc75d2006a23eeed4cf64218b1cc96ebb1525644ceb1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51ce25506a90ce639daa32eb6bcb610f3c0966f2c0cb916e99d2f0bb0890964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9acb784920206987a3db2b88131064e75c4304bec00aaf8d8af0d3c92b51b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://405f538b6e2f31631ad2f39ae32bf79403c20722249cebf04778c7238d33976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7c6c35230b60ceb9ecc50122f257478fffda029f6f1485e1721ae673fd15cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49328134525b8369a74efd3535d4476db05f5c591687740f4c243bb10d98d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86937e198db0410fcc89e5afbaf2087ef9c1379418906e0e79681893d5edd637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644045dde08d66dde278aeaa79b3d87b4ef73c6f51dde2e76b1d08fc0c3a62fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.904877 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f232450-0c54-40cb-b91d-50b2d30a4cb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b63d8aa27be6123ef3f3fff52ef919df4630a9f738a8b8b8b95bc256d5a9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cc91ac81b7f6ce9fbe293939a1484253c7f5aa358d3ee8dfeb86c3baf18c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:
24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0763c58e562c81afe4d5826c182a016c4993aa0da5a4f88f606a506349d8bb4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.908232 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.908296 
4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.908310 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.908334 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.908350 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:02Z","lastTransitionTime":"2026-02-16T20:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.929307 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.948470 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-55x7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"746baa9e-089b-4907-9809-72705f44cd00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde9116045a1cd8d6377b4cba3ba4586f987086668543e9ea59d312b79a4a1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-55x7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.968035 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa95b3fc-1bfa-44f3-b568-7f325b230c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db45e15274f603c8641652ce964196b10b7d78a19056cf9e1b528e63cb0f9062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fh2mx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:02 crc kubenswrapper[4811]: I0216 20:58:02.986925 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7882d54c-73b3-4b59-b98c-dafea45a2600\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b94a3750a50b4ec77d812e54702f5419af37a45dc21a30eaf918dbe789da0651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-ku
belet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29c0f57d191bf3e315467166fa2ad14c9add128291cc79cdd05c0c2f40c9f167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c0f57d191bf3e315467166fa2ad14c9add128291cc79cdd05c0c2f40c9f167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.008757 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68aa9000d5f81a690e2846d823d631e448274329c41686f3ec8ab85076790409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4d2040891775779b61e39d2194d2d0ee613e043aa46b64559cd084f84b2b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.011602 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.011696 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.011721 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.011750 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.011772 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:03Z","lastTransitionTime":"2026-02-16T20:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.023684 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51c7c86ea4d40ac3c7b70059a70382cbb8e3904986bfb3eb4eb5e509990e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.039851 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"479f901f-0d27-49cb-8ce9-861848c4e0b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414a43cbfdf4a40d4397c606cd52588bb0fcd2ae991d0c4a15763c7f15c838e8\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7873c7b19e084b4229d8febcd49937865b3985f85da91030cb1903e29fa98e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29ea5ac333faed8db6411c9e0c6df9f2dddc04f2508542af3459e238421b32b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1073e260688ba29f33ff43e6eab39d1bd347370fee75369cd0c5bf01b73701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e3d3bebfe361938e0ff5fed8221716f7a0399a67777dad7204baf42e8c395aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca383bbb0a0e80abc685f98615940529b0c88e85f73c371e73b3e4e5c1536c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://531af51f9614b7c04adc1b210bd5d6e823e0408ee688ee74cf03b0efae34ba0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcbh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzmxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.064353 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c014a2e2-6a69-47fc-b547-4dc52873a43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:36Z\\\"
,\\\"message\\\":\\\"W0216 20:56:25.893464 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:25.893830 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275385 cert, and key in /tmp/serving-cert-2790518915/serving-signer.crt, /tmp/serving-cert-2790518915/serving-signer.key\\\\nI0216 20:56:26.093710 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:26.096702 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:26.096843 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:26.099272 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2790518915/tls.crt::/tmp/serving-cert-2790518915/tls.key\\\\\\\"\\\\nF0216 20:56:36.314196 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T20:56:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.086093 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.111146 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mgctp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a946fefd-e014-48b1-995b-ef221a88bc73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://276a19c80bef50556fb786571f8b1c5f5d2a798fa193fc5854a3cafa254b32c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:34Z\\\",\\\"message\\\":\\\"2026-02-16T20:56:48+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0c13ebbc-099e-472a-a694-f3e90f379f63\\\\n2026-02-16T20:56:48+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0c13ebbc-099e-472a-a694-f3e90f379f63 to /host/opt/cni/bin/\\\\n2026-02-16T20:56:49Z [verbose] multus-daemon started\\\\n2026-02-16T20:56:49Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T20:57:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76fcv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mgctp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.115095 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.115161 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.115180 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.115241 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.115265 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:03Z","lastTransitionTime":"2026-02-16T20:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.138611 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91ed3265-a583-4b6c-bb05-52f5b758b44d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b00840bdcc3183dce8bb004f0e2eeb132030cf0895d91bdefa430d0e9593cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12cf9d69d4d523505bd8f6a9183f62a05788b057ac1667956aa5aba063ee5012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2tpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l89mr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.219107 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:03 crc 
kubenswrapper[4811]: I0216 20:58:03.219183 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.219233 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.219267 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.219288 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:03Z","lastTransitionTime":"2026-02-16T20:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.322907 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.322967 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.322981 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.323001 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.323016 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:03Z","lastTransitionTime":"2026-02-16T20:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.426812 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.426884 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.426903 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.426930 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.426951 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:03Z","lastTransitionTime":"2026-02-16T20:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.530130 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.530237 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.530256 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.530281 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.530300 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:03Z","lastTransitionTime":"2026-02-16T20:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.633008 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.633060 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.633086 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.633113 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.633133 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:03Z","lastTransitionTime":"2026-02-16T20:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.702291 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:03 crc kubenswrapper[4811]: E0216 20:58:03.702491 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.702288 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:58:03 crc kubenswrapper[4811]: E0216 20:58:03.702885 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.736440 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.736513 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.736532 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.736556 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.736576 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:03Z","lastTransitionTime":"2026-02-16T20:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.740637 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 11:53:39.12697957 +0000 UTC Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.840247 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.840325 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.840348 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.840381 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.840401 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:03Z","lastTransitionTime":"2026-02-16T20:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.943162 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.943298 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.943327 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.943366 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:03 crc kubenswrapper[4811]: I0216 20:58:03.943394 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:03Z","lastTransitionTime":"2026-02-16T20:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.048460 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.048588 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.048611 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.048667 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.048687 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:04Z","lastTransitionTime":"2026-02-16T20:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.152147 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.152256 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.152281 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.152320 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.152341 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:04Z","lastTransitionTime":"2026-02-16T20:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.255608 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.255700 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.255724 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.255758 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.255783 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:04Z","lastTransitionTime":"2026-02-16T20:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.359603 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.359689 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.359707 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.359738 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.359761 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:04Z","lastTransitionTime":"2026-02-16T20:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.462607 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.462660 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.462678 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.462710 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.462731 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:04Z","lastTransitionTime":"2026-02-16T20:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.566075 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.566148 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.566166 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.566227 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.566249 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:04Z","lastTransitionTime":"2026-02-16T20:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.669838 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.669916 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.669936 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.669984 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.670017 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:04Z","lastTransitionTime":"2026-02-16T20:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.702589 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:04 crc kubenswrapper[4811]: E0216 20:58:04.702862 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.703368 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:04 crc kubenswrapper[4811]: E0216 20:58:04.703634 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.741917 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 20:06:40.083352826 +0000 UTC Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.774157 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.774252 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.774271 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.774295 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.774346 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:04Z","lastTransitionTime":"2026-02-16T20:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.879117 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.879191 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.879233 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.879261 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.879284 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:04Z","lastTransitionTime":"2026-02-16T20:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.983021 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.983069 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.983086 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.983111 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:04 crc kubenswrapper[4811]: I0216 20:58:04.983129 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:04Z","lastTransitionTime":"2026-02-16T20:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.086622 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.087124 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.087413 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.087633 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.087837 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:05Z","lastTransitionTime":"2026-02-16T20:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.191157 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.191242 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.191257 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.191282 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.191303 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:05Z","lastTransitionTime":"2026-02-16T20:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.295452 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.295528 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.295545 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.295576 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.295595 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:05Z","lastTransitionTime":"2026-02-16T20:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.399099 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.399578 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.399750 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.399970 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.400180 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:05Z","lastTransitionTime":"2026-02-16T20:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.503708 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.503853 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.503882 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.503914 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.503939 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:05Z","lastTransitionTime":"2026-02-16T20:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.606892 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.606937 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.606947 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.606964 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.606974 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:05Z","lastTransitionTime":"2026-02-16T20:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.702391 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.702391 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:58:05 crc kubenswrapper[4811]: E0216 20:58:05.702760 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:05 crc kubenswrapper[4811]: E0216 20:58:05.702989 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.709721 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.709781 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.709799 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.709824 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.709843 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:05Z","lastTransitionTime":"2026-02-16T20:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.742663 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 11:51:53.34541592 +0000 UTC Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.813493 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.813600 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.813632 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.813669 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.813698 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:05Z","lastTransitionTime":"2026-02-16T20:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.917132 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.917175 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.917187 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.917232 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:05 crc kubenswrapper[4811]: I0216 20:58:05.917246 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:05Z","lastTransitionTime":"2026-02-16T20:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.024488 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.024565 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.024581 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.024605 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.024624 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:06Z","lastTransitionTime":"2026-02-16T20:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.129058 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.129126 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.129145 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.129171 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.129219 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:06Z","lastTransitionTime":"2026-02-16T20:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.232748 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.232801 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.232814 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.232831 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.232843 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:06Z","lastTransitionTime":"2026-02-16T20:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.335486 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.335517 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.335528 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.335542 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.335551 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:06Z","lastTransitionTime":"2026-02-16T20:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.438602 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.438643 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.438655 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.438671 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.438681 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:06Z","lastTransitionTime":"2026-02-16T20:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.542410 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.542484 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.542503 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.542533 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.542553 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:06Z","lastTransitionTime":"2026-02-16T20:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.645865 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.645953 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.645980 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.646012 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.646034 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:06Z","lastTransitionTime":"2026-02-16T20:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.702096 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.702177 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:06 crc kubenswrapper[4811]: E0216 20:58:06.702278 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:06 crc kubenswrapper[4811]: E0216 20:58:06.702443 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.743711 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 16:32:31.431986879 +0000 UTC Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.749150 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.749191 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.749213 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.749232 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.749244 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:06Z","lastTransitionTime":"2026-02-16T20:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.853232 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.853303 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.853318 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.853360 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.853378 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:06Z","lastTransitionTime":"2026-02-16T20:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.956632 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.956672 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.956681 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.956703 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:06 crc kubenswrapper[4811]: I0216 20:58:06.956725 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:06Z","lastTransitionTime":"2026-02-16T20:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.060615 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.060684 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.060708 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.060741 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.060761 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:07Z","lastTransitionTime":"2026-02-16T20:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.163728 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.163817 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.163843 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.163874 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.163895 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:07Z","lastTransitionTime":"2026-02-16T20:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.267149 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.267252 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.267280 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.267311 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.267334 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:07Z","lastTransitionTime":"2026-02-16T20:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.370583 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.370625 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.370634 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.370653 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.370665 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:07Z","lastTransitionTime":"2026-02-16T20:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.472799 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.472878 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.472898 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.472932 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.472954 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:07Z","lastTransitionTime":"2026-02-16T20:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.575479 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.575557 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.575581 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.575608 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.575628 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:07Z","lastTransitionTime":"2026-02-16T20:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.678970 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.679054 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.679082 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.679114 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.679138 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:07Z","lastTransitionTime":"2026-02-16T20:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.702000 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.702249 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:58:07 crc kubenswrapper[4811]: E0216 20:58:07.702399 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:07 crc kubenswrapper[4811]: E0216 20:58:07.702580 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.744858 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:55:05.286263463 +0000 UTC Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.781312 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.781372 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.781393 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.781418 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.781438 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:07Z","lastTransitionTime":"2026-02-16T20:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.885079 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.885140 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.885161 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.885185 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.885242 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:07Z","lastTransitionTime":"2026-02-16T20:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.988715 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.988778 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.988799 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.988826 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:07 crc kubenswrapper[4811]: I0216 20:58:07.988845 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:07Z","lastTransitionTime":"2026-02-16T20:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.091947 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.092022 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.092047 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.092079 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.092104 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:08Z","lastTransitionTime":"2026-02-16T20:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.195602 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.195672 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.195690 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.195715 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.195733 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:08Z","lastTransitionTime":"2026-02-16T20:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.299349 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.299406 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.299416 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.299439 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.299465 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:08Z","lastTransitionTime":"2026-02-16T20:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.402768 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.402820 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.402833 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.402853 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.402866 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:08Z","lastTransitionTime":"2026-02-16T20:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.505842 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.505903 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.505921 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.505944 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.505962 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:08Z","lastTransitionTime":"2026-02-16T20:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.608878 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.608946 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.608955 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.608978 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.608989 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:08Z","lastTransitionTime":"2026-02-16T20:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.702306 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.702430 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:08 crc kubenswrapper[4811]: E0216 20:58:08.702465 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:08 crc kubenswrapper[4811]: E0216 20:58:08.702623 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.714821 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.714899 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.714927 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.714957 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.714972 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:08Z","lastTransitionTime":"2026-02-16T20:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.745306 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 12:47:23.042637833 +0000 UTC Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.819183 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.819296 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.819318 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.819352 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.819563 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:08Z","lastTransitionTime":"2026-02-16T20:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.923607 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.923654 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.923663 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.923685 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:08 crc kubenswrapper[4811]: I0216 20:58:08.923698 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:08Z","lastTransitionTime":"2026-02-16T20:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.027258 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.027306 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.027319 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.027521 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.027534 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:09Z","lastTransitionTime":"2026-02-16T20:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.130080 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.130379 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.130409 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.130443 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.130464 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:09Z","lastTransitionTime":"2026-02-16T20:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.233443 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.233535 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.233563 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.233593 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.233615 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:09Z","lastTransitionTime":"2026-02-16T20:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.337157 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.337276 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.337291 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.337311 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.337325 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:09Z","lastTransitionTime":"2026-02-16T20:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.440409 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.440493 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.440511 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.440555 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.440591 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:09Z","lastTransitionTime":"2026-02-16T20:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.543780 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.543828 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.543841 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.543861 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.543873 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:09Z","lastTransitionTime":"2026-02-16T20:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.647011 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.647074 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.647088 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.647113 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.647128 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:09Z","lastTransitionTime":"2026-02-16T20:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.703032 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.703071 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:09 crc kubenswrapper[4811]: E0216 20:58:09.703869 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:09 crc kubenswrapper[4811]: E0216 20:58:09.703868 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.746164 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 07:32:32.352938268 +0000 UTC Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.750628 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.750676 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.750690 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.750713 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.750731 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:09Z","lastTransitionTime":"2026-02-16T20:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.854374 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.854415 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.854425 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.854440 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.854449 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:09Z","lastTransitionTime":"2026-02-16T20:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.957609 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.957679 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.957696 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.957731 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:09 crc kubenswrapper[4811]: I0216 20:58:09.957752 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:09Z","lastTransitionTime":"2026-02-16T20:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.061055 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.061113 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.061124 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.061143 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.061154 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:10Z","lastTransitionTime":"2026-02-16T20:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.164577 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.164648 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.164667 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.164692 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.164714 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:10Z","lastTransitionTime":"2026-02-16T20:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.267975 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.268062 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.268080 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.268107 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.268124 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:10Z","lastTransitionTime":"2026-02-16T20:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.370560 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.370609 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.370620 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.370634 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.370644 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:10Z","lastTransitionTime":"2026-02-16T20:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.474261 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.474321 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.474341 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.474371 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.474395 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:10Z","lastTransitionTime":"2026-02-16T20:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.578324 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.579352 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.579404 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.579440 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.579468 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:10Z","lastTransitionTime":"2026-02-16T20:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.682281 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.682344 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.682357 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.682380 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.682395 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:10Z","lastTransitionTime":"2026-02-16T20:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.703544 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.703648 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:10 crc kubenswrapper[4811]: E0216 20:58:10.703774 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:10 crc kubenswrapper[4811]: E0216 20:58:10.703889 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.746800 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 08:14:27.374567322 +0000 UTC Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.784601 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.784656 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.784675 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.784702 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.784722 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:10Z","lastTransitionTime":"2026-02-16T20:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.888662 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.888743 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.888767 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.888796 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.888822 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:10Z","lastTransitionTime":"2026-02-16T20:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.907016 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.907130 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.907153 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.907179 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.907225 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:10Z","lastTransitionTime":"2026-02-16T20:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.988568 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tg4p"] Feb 16 20:58:10 crc kubenswrapper[4811]: I0216 20:58:10.989975 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tg4p" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.000868 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.001371 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.001844 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.002436 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.053148 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=90.051031961 podStartE2EDuration="1m30.051031961s" podCreationTimestamp="2026-02-16 20:56:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:11.025712182 +0000 UTC m=+108.955008120" watchObservedRunningTime="2026-02-16 20:58:11.051031961 +0000 UTC m=+108.980327919" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.072830 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-mgctp" podStartSLOduration=88.072785483 podStartE2EDuration="1m28.072785483s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:11.071792999 +0000 UTC m=+109.001088947" watchObservedRunningTime="2026-02-16 20:58:11.072785483 +0000 UTC m=+109.002081491" Feb 16 
20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.109894 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l89mr" podStartSLOduration=88.109861299 podStartE2EDuration="1m28.109861299s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:11.089947052 +0000 UTC m=+109.019243070" watchObservedRunningTime="2026-02-16 20:58:11.109861299 +0000 UTC m=+109.039157237" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.113721 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eccbdc2c-067a-43d9-a4bd-3a28660b540b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5tg4p\" (UID: \"eccbdc2c-067a-43d9-a4bd-3a28660b540b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tg4p" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.113769 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/eccbdc2c-067a-43d9-a4bd-3a28660b540b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5tg4p\" (UID: \"eccbdc2c-067a-43d9-a4bd-3a28660b540b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tg4p" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.113811 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eccbdc2c-067a-43d9-a4bd-3a28660b540b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5tg4p\" (UID: \"eccbdc2c-067a-43d9-a4bd-3a28660b540b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tg4p" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.114134 
4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eccbdc2c-067a-43d9-a4bd-3a28660b540b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5tg4p\" (UID: \"eccbdc2c-067a-43d9-a4bd-3a28660b540b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tg4p" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.114349 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/eccbdc2c-067a-43d9-a4bd-3a28660b540b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5tg4p\" (UID: \"eccbdc2c-067a-43d9-a4bd-3a28660b540b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tg4p" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.165092 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-xwj8v" podStartSLOduration=88.165069879 podStartE2EDuration="1m28.165069879s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:11.16467851 +0000 UTC m=+109.093974448" watchObservedRunningTime="2026-02-16 20:58:11.165069879 +0000 UTC m=+109.094365827" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.187513 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=61.187486447 podStartE2EDuration="1m1.187486447s" podCreationTimestamp="2026-02-16 20:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:11.187435196 +0000 UTC m=+109.116731134" watchObservedRunningTime="2026-02-16 20:58:11.187486447 +0000 UTC 
m=+109.116782395" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.215341 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eccbdc2c-067a-43d9-a4bd-3a28660b540b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5tg4p\" (UID: \"eccbdc2c-067a-43d9-a4bd-3a28660b540b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tg4p" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.215394 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/eccbdc2c-067a-43d9-a4bd-3a28660b540b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5tg4p\" (UID: \"eccbdc2c-067a-43d9-a4bd-3a28660b540b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tg4p" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.215412 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eccbdc2c-067a-43d9-a4bd-3a28660b540b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5tg4p\" (UID: \"eccbdc2c-067a-43d9-a4bd-3a28660b540b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tg4p" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.215429 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/eccbdc2c-067a-43d9-a4bd-3a28660b540b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5tg4p\" (UID: \"eccbdc2c-067a-43d9-a4bd-3a28660b540b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tg4p" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.215456 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eccbdc2c-067a-43d9-a4bd-3a28660b540b-serving-cert\") 
pod \"cluster-version-operator-5c965bbfc6-5tg4p\" (UID: \"eccbdc2c-067a-43d9-a4bd-3a28660b540b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tg4p" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.215506 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/eccbdc2c-067a-43d9-a4bd-3a28660b540b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5tg4p\" (UID: \"eccbdc2c-067a-43d9-a4bd-3a28660b540b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tg4p" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.215552 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/eccbdc2c-067a-43d9-a4bd-3a28660b540b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5tg4p\" (UID: \"eccbdc2c-067a-43d9-a4bd-3a28660b540b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tg4p" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.216769 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eccbdc2c-067a-43d9-a4bd-3a28660b540b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5tg4p\" (UID: \"eccbdc2c-067a-43d9-a4bd-3a28660b540b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tg4p" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.220517 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=88.220494614 podStartE2EDuration="1m28.220494614s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:11.220069184 +0000 UTC m=+109.149365142" watchObservedRunningTime="2026-02-16 20:58:11.220494614 +0000 UTC 
m=+109.149790552" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.231891 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eccbdc2c-067a-43d9-a4bd-3a28660b540b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5tg4p\" (UID: \"eccbdc2c-067a-43d9-a4bd-3a28660b540b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tg4p" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.232547 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eccbdc2c-067a-43d9-a4bd-3a28660b540b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5tg4p\" (UID: \"eccbdc2c-067a-43d9-a4bd-3a28660b540b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tg4p" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.255302 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=84.255278364 podStartE2EDuration="1m24.255278364s" podCreationTimestamp="2026-02-16 20:56:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:11.237436868 +0000 UTC m=+109.166732836" watchObservedRunningTime="2026-02-16 20:58:11.255278364 +0000 UTC m=+109.184574312" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.270854 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-55x7j" podStartSLOduration=88.270821705 podStartE2EDuration="1m28.270821705s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:11.269841341 +0000 UTC m=+109.199137289" watchObservedRunningTime="2026-02-16 20:58:11.270821705 +0000 
UTC m=+109.200117663" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.283310 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podStartSLOduration=88.283282139 podStartE2EDuration="1m28.283282139s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:11.282389317 +0000 UTC m=+109.211685255" watchObservedRunningTime="2026-02-16 20:58:11.283282139 +0000 UTC m=+109.212578087" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.294337 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=23.294320179 podStartE2EDuration="23.294320179s" podCreationTimestamp="2026-02-16 20:57:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:11.29395273 +0000 UTC m=+109.223248668" watchObservedRunningTime="2026-02-16 20:58:11.294320179 +0000 UTC m=+109.223616117" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.320466 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tg4p" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.360436 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tg4p" event={"ID":"eccbdc2c-067a-43d9-a4bd-3a28660b540b","Type":"ContainerStarted","Data":"a1fb3f7b819cd07521bf4f24af0cd0a9977bdaeaf38193013a02ae023bcfbdd1"} Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.361018 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-mzmxb" podStartSLOduration=88.361004329 podStartE2EDuration="1m28.361004329s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:11.342816875 +0000 UTC m=+109.272112853" watchObservedRunningTime="2026-02-16 20:58:11.361004329 +0000 UTC m=+109.290300267" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.701947 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.701944 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:11 crc kubenswrapper[4811]: E0216 20:58:11.702867 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:58:11 crc kubenswrapper[4811]: E0216 20:58:11.703054 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.703273 4811 scope.go:117] "RemoveContainer" containerID="6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69" Feb 16 20:58:11 crc kubenswrapper[4811]: E0216 20:58:11.703500 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x2ggt_openshift-ovn-kubernetes(e1bbcd0c-f192-4210-831c-82e87a4768a7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.747040 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 23:17:34.027937174 +0000 UTC Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.747122 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 16 20:58:11 crc kubenswrapper[4811]: I0216 20:58:11.757284 4811 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 20:58:12 crc kubenswrapper[4811]: I0216 20:58:12.366460 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tg4p" 
event={"ID":"eccbdc2c-067a-43d9-a4bd-3a28660b540b","Type":"ContainerStarted","Data":"fd5aa1a6f031446be7bee6fd78d76976b523397b58221158d8d26d6e1856147c"} Feb 16 20:58:12 crc kubenswrapper[4811]: I0216 20:58:12.389082 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tg4p" podStartSLOduration=89.389054393 podStartE2EDuration="1m29.389054393s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:12.386835969 +0000 UTC m=+110.316131947" watchObservedRunningTime="2026-02-16 20:58:12.389054393 +0000 UTC m=+110.318350371" Feb 16 20:58:12 crc kubenswrapper[4811]: I0216 20:58:12.702619 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:12 crc kubenswrapper[4811]: I0216 20:58:12.702809 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:12 crc kubenswrapper[4811]: E0216 20:58:12.704162 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:12 crc kubenswrapper[4811]: E0216 20:58:12.704448 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:13 crc kubenswrapper[4811]: I0216 20:58:13.702898 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:13 crc kubenswrapper[4811]: I0216 20:58:13.703077 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:58:13 crc kubenswrapper[4811]: E0216 20:58:13.703644 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:13 crc kubenswrapper[4811]: E0216 20:58:13.703818 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:58:14 crc kubenswrapper[4811]: I0216 20:58:14.702228 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:14 crc kubenswrapper[4811]: I0216 20:58:14.702360 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:14 crc kubenswrapper[4811]: E0216 20:58:14.702430 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:14 crc kubenswrapper[4811]: E0216 20:58:14.702552 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:15 crc kubenswrapper[4811]: I0216 20:58:15.702367 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:15 crc kubenswrapper[4811]: I0216 20:58:15.702390 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:58:15 crc kubenswrapper[4811]: E0216 20:58:15.702611 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:15 crc kubenswrapper[4811]: E0216 20:58:15.702866 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:58:16 crc kubenswrapper[4811]: I0216 20:58:16.702150 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:16 crc kubenswrapper[4811]: I0216 20:58:16.702316 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:16 crc kubenswrapper[4811]: E0216 20:58:16.702438 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:16 crc kubenswrapper[4811]: E0216 20:58:16.702568 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:17 crc kubenswrapper[4811]: I0216 20:58:17.702435 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:17 crc kubenswrapper[4811]: I0216 20:58:17.702475 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:58:17 crc kubenswrapper[4811]: E0216 20:58:17.702607 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:17 crc kubenswrapper[4811]: E0216 20:58:17.702715 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:58:18 crc kubenswrapper[4811]: I0216 20:58:18.703042 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:18 crc kubenswrapper[4811]: I0216 20:58:18.703101 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:18 crc kubenswrapper[4811]: E0216 20:58:18.703350 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:18 crc kubenswrapper[4811]: E0216 20:58:18.703518 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:19 crc kubenswrapper[4811]: I0216 20:58:19.701973 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:58:19 crc kubenswrapper[4811]: I0216 20:58:19.702121 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:19 crc kubenswrapper[4811]: E0216 20:58:19.702259 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:58:19 crc kubenswrapper[4811]: E0216 20:58:19.702366 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:20 crc kubenswrapper[4811]: I0216 20:58:20.417230 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mgctp_a946fefd-e014-48b1-995b-ef221a88bc73/kube-multus/1.log" Feb 16 20:58:20 crc kubenswrapper[4811]: I0216 20:58:20.420699 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mgctp_a946fefd-e014-48b1-995b-ef221a88bc73/kube-multus/0.log" Feb 16 20:58:20 crc kubenswrapper[4811]: I0216 20:58:20.420807 4811 generic.go:334] "Generic (PLEG): container finished" podID="a946fefd-e014-48b1-995b-ef221a88bc73" containerID="276a19c80bef50556fb786571f8b1c5f5d2a798fa193fc5854a3cafa254b32c8" exitCode=1 Feb 16 20:58:20 crc kubenswrapper[4811]: I0216 20:58:20.420884 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mgctp" event={"ID":"a946fefd-e014-48b1-995b-ef221a88bc73","Type":"ContainerDied","Data":"276a19c80bef50556fb786571f8b1c5f5d2a798fa193fc5854a3cafa254b32c8"} Feb 16 20:58:20 crc kubenswrapper[4811]: I0216 20:58:20.420943 4811 scope.go:117] "RemoveContainer" containerID="9e5d2cbe8eaf2feb2769b299f596c3240fbb4172d5a3e1f8f79b8d5b8dc1e11b" Feb 16 20:58:20 crc kubenswrapper[4811]: I0216 20:58:20.421765 4811 scope.go:117] "RemoveContainer" containerID="276a19c80bef50556fb786571f8b1c5f5d2a798fa193fc5854a3cafa254b32c8" Feb 16 20:58:20 crc kubenswrapper[4811]: E0216 20:58:20.422053 4811 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-mgctp_openshift-multus(a946fefd-e014-48b1-995b-ef221a88bc73)\"" pod="openshift-multus/multus-mgctp" podUID="a946fefd-e014-48b1-995b-ef221a88bc73" Feb 16 20:58:20 crc kubenswrapper[4811]: I0216 20:58:20.702697 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:20 crc kubenswrapper[4811]: E0216 20:58:20.702926 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:20 crc kubenswrapper[4811]: I0216 20:58:20.702727 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:20 crc kubenswrapper[4811]: E0216 20:58:20.703241 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:21 crc kubenswrapper[4811]: I0216 20:58:21.427669 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mgctp_a946fefd-e014-48b1-995b-ef221a88bc73/kube-multus/1.log" Feb 16 20:58:21 crc kubenswrapper[4811]: I0216 20:58:21.702058 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:58:21 crc kubenswrapper[4811]: I0216 20:58:21.702117 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:21 crc kubenswrapper[4811]: E0216 20:58:21.702325 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:58:21 crc kubenswrapper[4811]: E0216 20:58:21.702512 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:22 crc kubenswrapper[4811]: E0216 20:58:22.700079 4811 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 16 20:58:22 crc kubenswrapper[4811]: I0216 20:58:22.702947 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:22 crc kubenswrapper[4811]: I0216 20:58:22.703019 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:22 crc kubenswrapper[4811]: E0216 20:58:22.704794 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:22 crc kubenswrapper[4811]: E0216 20:58:22.704888 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:22 crc kubenswrapper[4811]: E0216 20:58:22.821314 4811 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 20:58:23 crc kubenswrapper[4811]: I0216 20:58:23.701930 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:23 crc kubenswrapper[4811]: I0216 20:58:23.702033 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:58:23 crc kubenswrapper[4811]: E0216 20:58:23.702134 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:23 crc kubenswrapper[4811]: E0216 20:58:23.702221 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:58:24 crc kubenswrapper[4811]: I0216 20:58:24.702682 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:24 crc kubenswrapper[4811]: E0216 20:58:24.702908 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:24 crc kubenswrapper[4811]: I0216 20:58:24.703066 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:24 crc kubenswrapper[4811]: E0216 20:58:24.703743 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:24 crc kubenswrapper[4811]: I0216 20:58:24.704280 4811 scope.go:117] "RemoveContainer" containerID="6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69" Feb 16 20:58:25 crc kubenswrapper[4811]: I0216 20:58:25.452302 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x2ggt_e1bbcd0c-f192-4210-831c-82e87a4768a7/ovnkube-controller/3.log" Feb 16 20:58:25 crc kubenswrapper[4811]: I0216 20:58:25.462248 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerStarted","Data":"23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d"} Feb 16 20:58:25 crc kubenswrapper[4811]: I0216 20:58:25.462905 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:58:25 crc kubenswrapper[4811]: I0216 20:58:25.505432 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" podStartSLOduration=102.505411242 podStartE2EDuration="1m42.505411242s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:25.50533832 +0000 UTC m=+123.434634278" 
watchObservedRunningTime="2026-02-16 20:58:25.505411242 +0000 UTC m=+123.434707180" Feb 16 20:58:25 crc kubenswrapper[4811]: I0216 20:58:25.702326 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:58:25 crc kubenswrapper[4811]: I0216 20:58:25.702516 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:25 crc kubenswrapper[4811]: E0216 20:58:25.702618 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:58:25 crc kubenswrapper[4811]: E0216 20:58:25.702780 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:25 crc kubenswrapper[4811]: I0216 20:58:25.711532 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7nk7k"] Feb 16 20:58:26 crc kubenswrapper[4811]: I0216 20:58:26.465384 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:58:26 crc kubenswrapper[4811]: E0216 20:58:26.465840 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:58:26 crc kubenswrapper[4811]: I0216 20:58:26.702662 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:26 crc kubenswrapper[4811]: E0216 20:58:26.702803 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:26 crc kubenswrapper[4811]: I0216 20:58:26.702665 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:26 crc kubenswrapper[4811]: E0216 20:58:26.703140 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:27 crc kubenswrapper[4811]: I0216 20:58:27.702264 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:58:27 crc kubenswrapper[4811]: I0216 20:58:27.702308 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:27 crc kubenswrapper[4811]: E0216 20:58:27.702567 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:58:27 crc kubenswrapper[4811]: E0216 20:58:27.702718 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:27 crc kubenswrapper[4811]: E0216 20:58:27.822974 4811 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 20:58:28 crc kubenswrapper[4811]: I0216 20:58:28.702131 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:28 crc kubenswrapper[4811]: I0216 20:58:28.702217 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:28 crc kubenswrapper[4811]: E0216 20:58:28.702339 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:28 crc kubenswrapper[4811]: E0216 20:58:28.702613 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:29 crc kubenswrapper[4811]: I0216 20:58:29.702984 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:58:29 crc kubenswrapper[4811]: E0216 20:58:29.703251 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:58:29 crc kubenswrapper[4811]: I0216 20:58:29.703565 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:29 crc kubenswrapper[4811]: E0216 20:58:29.703663 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:30 crc kubenswrapper[4811]: I0216 20:58:30.702445 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:30 crc kubenswrapper[4811]: E0216 20:58:30.702789 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:30 crc kubenswrapper[4811]: I0216 20:58:30.702883 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:30 crc kubenswrapper[4811]: E0216 20:58:30.703153 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:30 crc kubenswrapper[4811]: I0216 20:58:30.703678 4811 scope.go:117] "RemoveContainer" containerID="276a19c80bef50556fb786571f8b1c5f5d2a798fa193fc5854a3cafa254b32c8" Feb 16 20:58:31 crc kubenswrapper[4811]: I0216 20:58:31.488367 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mgctp_a946fefd-e014-48b1-995b-ef221a88bc73/kube-multus/1.log" Feb 16 20:58:31 crc kubenswrapper[4811]: I0216 20:58:31.488869 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mgctp" event={"ID":"a946fefd-e014-48b1-995b-ef221a88bc73","Type":"ContainerStarted","Data":"bf50f864995f5e7737f081953d628014fddf69c787e71973d21b61c272b0a372"} Feb 16 20:58:31 crc kubenswrapper[4811]: I0216 20:58:31.701988 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:58:31 crc kubenswrapper[4811]: I0216 20:58:31.701989 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:31 crc kubenswrapper[4811]: E0216 20:58:31.702569 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:58:31 crc kubenswrapper[4811]: E0216 20:58:31.702740 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:32 crc kubenswrapper[4811]: I0216 20:58:32.702404 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:32 crc kubenswrapper[4811]: I0216 20:58:32.702404 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:32 crc kubenswrapper[4811]: E0216 20:58:32.704623 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:32 crc kubenswrapper[4811]: E0216 20:58:32.705109 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:32 crc kubenswrapper[4811]: E0216 20:58:32.824546 4811 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 20:58:33 crc kubenswrapper[4811]: I0216 20:58:33.702868 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:58:33 crc kubenswrapper[4811]: I0216 20:58:33.702947 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:33 crc kubenswrapper[4811]: E0216 20:58:33.703140 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:58:33 crc kubenswrapper[4811]: E0216 20:58:33.703350 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:34 crc kubenswrapper[4811]: I0216 20:58:34.702983 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:34 crc kubenswrapper[4811]: I0216 20:58:34.703061 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:34 crc kubenswrapper[4811]: E0216 20:58:34.703287 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:34 crc kubenswrapper[4811]: E0216 20:58:34.703426 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:35 crc kubenswrapper[4811]: I0216 20:58:35.702834 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:35 crc kubenswrapper[4811]: I0216 20:58:35.702883 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:58:35 crc kubenswrapper[4811]: E0216 20:58:35.703520 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:35 crc kubenswrapper[4811]: E0216 20:58:35.703661 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:58:36 crc kubenswrapper[4811]: I0216 20:58:36.702461 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:36 crc kubenswrapper[4811]: E0216 20:58:36.702605 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:36 crc kubenswrapper[4811]: I0216 20:58:36.702464 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:36 crc kubenswrapper[4811]: E0216 20:58:36.702768 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:37 crc kubenswrapper[4811]: I0216 20:58:37.702669 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:58:37 crc kubenswrapper[4811]: I0216 20:58:37.702731 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:37 crc kubenswrapper[4811]: E0216 20:58:37.702835 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7nk7k" podUID="1b4c0a11-23d9-412e-a5d8-120d622bef57" Feb 16 20:58:37 crc kubenswrapper[4811]: E0216 20:58:37.702957 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:38 crc kubenswrapper[4811]: I0216 20:58:38.702658 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:38 crc kubenswrapper[4811]: I0216 20:58:38.702816 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:38 crc kubenswrapper[4811]: I0216 20:58:38.715349 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 20:58:38 crc kubenswrapper[4811]: I0216 20:58:38.715412 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 20:58:39 crc kubenswrapper[4811]: I0216 20:58:39.702748 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:58:39 crc kubenswrapper[4811]: I0216 20:58:39.702766 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:39 crc kubenswrapper[4811]: I0216 20:58:39.705941 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 20:58:39 crc kubenswrapper[4811]: I0216 20:58:39.707471 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 20:58:39 crc kubenswrapper[4811]: I0216 20:58:39.707735 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 16 20:58:39 crc kubenswrapper[4811]: I0216 20:58:39.707818 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.386293 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.447030 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gx777"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.448069 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-gx777" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.449315 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hxljc"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.450010 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.467031 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.468293 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-njf2g"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.468863 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.468980 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.469165 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.469249 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.470407 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.470597 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.470936 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.471355 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 20:58:41 crc kubenswrapper[4811]: 
I0216 20:58:41.481645 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.484706 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.485299 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.488100 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.489208 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.490682 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rpx95"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.491746 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rpx95" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.502874 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.503038 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.503068 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.503144 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.503276 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.503467 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.504062 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.504093 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.504103 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.504806 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.509715 4811 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.510599 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.511225 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.511967 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.512560 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.513015 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.513498 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.514122 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.514653 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qjhps"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.518977 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.522019 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" 
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.522024 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.522416 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.522570 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.522886 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.524072 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.524727 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.524936 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.525011 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.525191 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.525402 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.525795 4811 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.526255 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.526825 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.527122 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.528107 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.531497 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4f8kg"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.538006 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-zwtjs"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.538443 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-g82fx"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.538767 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qjhps" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.538935 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-njf2g"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.538968 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qjhps"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.539707 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-zwtjs" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.540727 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.543312 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gx777"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.543437 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g82fx" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.544780 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hxljc"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.545377 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgtxl"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.545965 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgtxl" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.547758 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.547930 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.548025 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.548110 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.548342 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.548463 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.548527 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.548962 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.549528 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.549706 4811 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.549846 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fh4pc"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.550327 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4z425"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.551028 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.551420 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.551518 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.551581 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fh4pc" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.551702 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.552011 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.552578 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.558232 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.558445 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.558569 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.558686 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.558827 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.559081 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.559254 4811 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.559396 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.559826 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.559940 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.560039 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.560153 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.561800 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.562308 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.594821 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce7d7ec-2a9d-4404-917a-da07f09d990d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rpx95\" (UID: \"4ce7d7ec-2a9d-4404-917a-da07f09d990d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rpx95" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.594870 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/a956e785-7e90-41d8-97ea-d89664b3719a-config\") pod \"controller-manager-879f6c89f-hxljc\" (UID: \"a956e785-7e90-41d8-97ea-d89664b3719a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.594903 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a956e785-7e90-41d8-97ea-d89664b3719a-serving-cert\") pod \"controller-manager-879f6c89f-hxljc\" (UID: \"a956e785-7e90-41d8-97ea-d89664b3719a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.594934 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/81a41d1f-0c1d-41cf-991b-f521c34bde80-audit\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.594959 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pldnp\" (UniqueName: \"kubernetes.io/projected/90270354-a779-4378-8bca-c2ff51ecac2e-kube-api-access-pldnp\") pod \"machine-api-operator-5694c8668f-gx777\" (UID: \"90270354-a779-4378-8bca-c2ff51ecac2e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gx777" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.594978 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ce7d7ec-2a9d-4404-917a-da07f09d990d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rpx95\" (UID: \"4ce7d7ec-2a9d-4404-917a-da07f09d990d\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rpx95" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.594996 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90270354-a779-4378-8bca-c2ff51ecac2e-config\") pod \"machine-api-operator-5694c8668f-gx777\" (UID: \"90270354-a779-4378-8bca-c2ff51ecac2e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gx777" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.595017 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/35fa6f12-cf55-48d7-82ef-4987071adff7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fv7gf\" (UID: \"35fa6f12-cf55-48d7-82ef-4987071adff7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.595049 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a41d1f-0c1d-41cf-991b-f521c34bde80-serving-cert\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.595072 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81a41d1f-0c1d-41cf-991b-f521c34bde80-trusted-ca-bundle\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.595130 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-cch5x"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.595153 
4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35fa6f12-cf55-48d7-82ef-4987071adff7-serving-cert\") pod \"apiserver-7bbb656c7d-fv7gf\" (UID: \"35fa6f12-cf55-48d7-82ef-4987071adff7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.595295 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/81a41d1f-0c1d-41cf-991b-f521c34bde80-etcd-client\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.595441 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a41d1f-0c1d-41cf-991b-f521c34bde80-config\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.595609 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54v2p\" (UniqueName: \"kubernetes.io/projected/81a41d1f-0c1d-41cf-991b-f521c34bde80-kube-api-access-54v2p\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.595702 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/81a41d1f-0c1d-41cf-991b-f521c34bde80-etcd-serving-ca\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" 
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.595736 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5zb6\" (UniqueName: \"kubernetes.io/projected/a956e785-7e90-41d8-97ea-d89664b3719a-kube-api-access-v5zb6\") pod \"controller-manager-879f6c89f-hxljc\" (UID: \"a956e785-7e90-41d8-97ea-d89664b3719a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.595765 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vdww\" (UniqueName: \"kubernetes.io/projected/35fa6f12-cf55-48d7-82ef-4987071adff7-kube-api-access-8vdww\") pod \"apiserver-7bbb656c7d-fv7gf\" (UID: \"35fa6f12-cf55-48d7-82ef-4987071adff7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.595790 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/90270354-a779-4378-8bca-c2ff51ecac2e-images\") pod \"machine-api-operator-5694c8668f-gx777\" (UID: \"90270354-a779-4378-8bca-c2ff51ecac2e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gx777" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.595820 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/35fa6f12-cf55-48d7-82ef-4987071adff7-audit-policies\") pod \"apiserver-7bbb656c7d-fv7gf\" (UID: \"35fa6f12-cf55-48d7-82ef-4987071adff7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.595863 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/81a41d1f-0c1d-41cf-991b-f521c34bde80-image-import-ca\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.595911 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7vm5\" (UniqueName: \"kubernetes.io/projected/4ce7d7ec-2a9d-4404-917a-da07f09d990d-kube-api-access-z7vm5\") pod \"openshift-apiserver-operator-796bbdcf4f-rpx95\" (UID: \"4ce7d7ec-2a9d-4404-917a-da07f09d990d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rpx95" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.595936 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a956e785-7e90-41d8-97ea-d89664b3719a-client-ca\") pod \"controller-manager-879f6c89f-hxljc\" (UID: \"a956e785-7e90-41d8-97ea-d89664b3719a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.595956 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/35fa6f12-cf55-48d7-82ef-4987071adff7-audit-dir\") pod \"apiserver-7bbb656c7d-fv7gf\" (UID: \"35fa6f12-cf55-48d7-82ef-4987071adff7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.596006 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e17f4635-2bd6-4ad1-b337-63c0e87ac247-serving-cert\") pod \"route-controller-manager-6576b87f9c-zqrkd\" (UID: \"e17f4635-2bd6-4ad1-b337-63c0e87ac247\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd" Feb 16 
20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.596086 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/81a41d1f-0c1d-41cf-991b-f521c34bde80-node-pullsecrets\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.596107 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e17f4635-2bd6-4ad1-b337-63c0e87ac247-client-ca\") pod \"route-controller-manager-6576b87f9c-zqrkd\" (UID: \"e17f4635-2bd6-4ad1-b337-63c0e87ac247\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.596144 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e17f4635-2bd6-4ad1-b337-63c0e87ac247-config\") pod \"route-controller-manager-6576b87f9c-zqrkd\" (UID: \"e17f4635-2bd6-4ad1-b337-63c0e87ac247\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.596174 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35fa6f12-cf55-48d7-82ef-4987071adff7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fv7gf\" (UID: \"35fa6f12-cf55-48d7-82ef-4987071adff7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.596576 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/90270354-a779-4378-8bca-c2ff51ecac2e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gx777\" (UID: \"90270354-a779-4378-8bca-c2ff51ecac2e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gx777" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.596608 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a956e785-7e90-41d8-97ea-d89664b3719a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hxljc\" (UID: \"a956e785-7e90-41d8-97ea-d89664b3719a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.596635 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/35fa6f12-cf55-48d7-82ef-4987071adff7-encryption-config\") pod \"apiserver-7bbb656c7d-fv7gf\" (UID: \"35fa6f12-cf55-48d7-82ef-4987071adff7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.596657 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/35fa6f12-cf55-48d7-82ef-4987071adff7-etcd-client\") pod \"apiserver-7bbb656c7d-fv7gf\" (UID: \"35fa6f12-cf55-48d7-82ef-4987071adff7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.596684 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvm56\" (UniqueName: \"kubernetes.io/projected/e17f4635-2bd6-4ad1-b337-63c0e87ac247-kube-api-access-nvm56\") pod \"route-controller-manager-6576b87f9c-zqrkd\" (UID: \"e17f4635-2bd6-4ad1-b337-63c0e87ac247\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd" 
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.596712 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/81a41d1f-0c1d-41cf-991b-f521c34bde80-encryption-config\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.596735 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/81a41d1f-0c1d-41cf-991b-f521c34bde80-audit-dir\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.598113 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-cch5x" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.601481 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-mn795"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.602611 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-mn795" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.603890 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.604083 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.604096 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.604310 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.604476 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.604489 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.604471 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.606242 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.608782 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.609295 4811 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.617653 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffr95"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.619783 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.620489 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.620667 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.620702 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.620806 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.621162 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.621341 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.621498 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.621781 4811 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.623386 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.629126 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rpx95"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.629296 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffr95" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.630240 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.630546 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.636695 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.638553 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.639465 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-8vgph"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.639675 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.639915 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 20:58:41 crc kubenswrapper[4811]: 
I0216 20:58:41.640607 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.640650 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.640741 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8vgph" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.641325 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2cqpn"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.642374 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.646388 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.650291 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.654903 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4km29"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.655581 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbbzr"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.655865 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ktp4v"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.656256 4811 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wljzx"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.656663 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wljzx" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.656778 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4km29" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.656985 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbbzr" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.657133 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2cqpn" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.657157 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ktp4v" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.666923 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9ckm2"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.669987 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9ckm2" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.670969 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.677764 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kk27l"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.678549 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vx5rb"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.678943 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vx5rb" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.679075 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kk27l" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.682492 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4f8kg"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.686645 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgtxl"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.687903 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mn795"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.689238 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-9hxzk"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.689779 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-9hxzk" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.691607 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.691795 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6q4n5"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.692261 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6q4n5" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.692746 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gdcrk"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.693599 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gdcrk" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.693851 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7wh82"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.694398 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7wh82" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.694910 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-lbxk8"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.695600 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-lbxk8" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.697427 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a101d06e-8e7f-4fcf-9788-a54237068ad7-config\") pod \"authentication-operator-69f744f599-zwtjs\" (UID: \"a101d06e-8e7f-4fcf-9788-a54237068ad7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zwtjs" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.697481 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce7d7ec-2a9d-4404-917a-da07f09d990d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rpx95\" (UID: \"4ce7d7ec-2a9d-4404-917a-da07f09d990d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rpx95" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.697510 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a956e785-7e90-41d8-97ea-d89664b3719a-config\") pod \"controller-manager-879f6c89f-hxljc\" (UID: \"a956e785-7e90-41d8-97ea-d89664b3719a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.697573 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a956e785-7e90-41d8-97ea-d89664b3719a-serving-cert\") pod \"controller-manager-879f6c89f-hxljc\" (UID: \"a956e785-7e90-41d8-97ea-d89664b3719a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.697617 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqqjr\" (UniqueName: 
\"kubernetes.io/projected/b3a24eed-0751-45b0-945e-6351d15be4f6-kube-api-access-dqqjr\") pod \"machine-approver-56656f9798-g82fx\" (UID: \"b3a24eed-0751-45b0-945e-6351d15be4f6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g82fx" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.697656 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7ff60cdb-3618-4902-a679-e5bda29c5c60-audit-policies\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.697683 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjwdv\" (UniqueName: \"kubernetes.io/projected/56ef6d7e-b0bf-4bfa-8426-68040e136fe1-kube-api-access-tjwdv\") pod \"downloads-7954f5f757-mn795\" (UID: \"56ef6d7e-b0bf-4bfa-8426-68040e136fe1\") " pod="openshift-console/downloads-7954f5f757-mn795" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.697708 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/20c76084-401b-41ca-ad08-2752d2d7132b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.697736 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48e7d3f9-4e00-476b-90bc-9d238ef4f5ca-config\") pod \"console-operator-58897d9998-cch5x\" (UID: \"48e7d3f9-4e00-476b-90bc-9d238ef4f5ca\") " pod="openshift-console-operator/console-operator-58897d9998-cch5x" Feb 16 20:58:41 crc 
kubenswrapper[4811]: I0216 20:58:41.697776 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/81a41d1f-0c1d-41cf-991b-f521c34bde80-audit\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.697806 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pldnp\" (UniqueName: \"kubernetes.io/projected/90270354-a779-4378-8bca-c2ff51ecac2e-kube-api-access-pldnp\") pod \"machine-api-operator-5694c8668f-gx777\" (UID: \"90270354-a779-4378-8bca-c2ff51ecac2e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gx777" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.697829 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.697852 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/35fa6f12-cf55-48d7-82ef-4987071adff7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fv7gf\" (UID: \"35fa6f12-cf55-48d7-82ef-4987071adff7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.697879 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/20c76084-401b-41ca-ad08-2752d2d7132b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4z425\" (UID: 
\"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.697900 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ce7d7ec-2a9d-4404-917a-da07f09d990d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rpx95\" (UID: \"4ce7d7ec-2a9d-4404-917a-da07f09d990d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rpx95" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.697920 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90270354-a779-4378-8bca-c2ff51ecac2e-config\") pod \"machine-api-operator-5694c8668f-gx777\" (UID: \"90270354-a779-4378-8bca-c2ff51ecac2e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gx777" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.697947 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a41d1f-0c1d-41cf-991b-f521c34bde80-serving-cert\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.697969 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/20c76084-401b-41ca-ad08-2752d2d7132b-registry-certificates\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.697989 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698017 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81a41d1f-0c1d-41cf-991b-f521c34bde80-trusted-ca-bundle\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698035 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35fa6f12-cf55-48d7-82ef-4987071adff7-serving-cert\") pod \"apiserver-7bbb656c7d-fv7gf\" (UID: \"35fa6f12-cf55-48d7-82ef-4987071adff7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698059 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/218883f2-cdcd-4b76-8f3c-dea0af40092c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qjhps\" (UID: \"218883f2-cdcd-4b76-8f3c-dea0af40092c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qjhps" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698079 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48e7d3f9-4e00-476b-90bc-9d238ef4f5ca-trusted-ca\") pod \"console-operator-58897d9998-cch5x\" (UID: \"48e7d3f9-4e00-476b-90bc-9d238ef4f5ca\") " pod="openshift-console-operator/console-operator-58897d9998-cch5x" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 
20:58:41.698105 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698130 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/81a41d1f-0c1d-41cf-991b-f521c34bde80-etcd-client\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698151 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7ff60cdb-3618-4902-a679-e5bda29c5c60-audit-dir\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698220 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698246 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a101d06e-8e7f-4fcf-9788-a54237068ad7-service-ca-bundle\") pod 
\"authentication-operator-69f744f599-zwtjs\" (UID: \"a101d06e-8e7f-4fcf-9788-a54237068ad7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zwtjs" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698266 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4cb2\" (UniqueName: \"kubernetes.io/projected/79f24eee-94ca-47b2-bcc5-389f01bf5849-kube-api-access-r4cb2\") pod \"openshift-controller-manager-operator-756b6f6bc6-pgtxl\" (UID: \"79f24eee-94ca-47b2-bcc5-389f01bf5849\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgtxl" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698293 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a41d1f-0c1d-41cf-991b-f521c34bde80-config\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698318 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54v2p\" (UniqueName: \"kubernetes.io/projected/81a41d1f-0c1d-41cf-991b-f521c34bde80-kube-api-access-54v2p\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698336 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3a24eed-0751-45b0-945e-6351d15be4f6-config\") pod \"machine-approver-56656f9798-g82fx\" (UID: \"b3a24eed-0751-45b0-945e-6351d15be4f6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g82fx" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698359 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698398 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgk99\" (UniqueName: \"kubernetes.io/projected/a101d06e-8e7f-4fcf-9788-a54237068ad7-kube-api-access-hgk99\") pod \"authentication-operator-69f744f599-zwtjs\" (UID: \"a101d06e-8e7f-4fcf-9788-a54237068ad7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zwtjs" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698419 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f8d67f2-74fc-4244-a62e-97fed3b28c79-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-fh4pc\" (UID: \"7f8d67f2-74fc-4244-a62e-97fed3b28c79\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fh4pc" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698450 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5zb6\" (UniqueName: \"kubernetes.io/projected/a956e785-7e90-41d8-97ea-d89664b3719a-kube-api-access-v5zb6\") pod \"controller-manager-879f6c89f-hxljc\" (UID: \"a956e785-7e90-41d8-97ea-d89664b3719a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698471 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vdww\" (UniqueName: 
\"kubernetes.io/projected/35fa6f12-cf55-48d7-82ef-4987071adff7-kube-api-access-8vdww\") pod \"apiserver-7bbb656c7d-fv7gf\" (UID: \"35fa6f12-cf55-48d7-82ef-4987071adff7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698497 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20c76084-401b-41ca-ad08-2752d2d7132b-bound-sa-token\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698521 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/81a41d1f-0c1d-41cf-991b-f521c34bde80-etcd-serving-ca\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698544 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/90270354-a779-4378-8bca-c2ff51ecac2e-images\") pod \"machine-api-operator-5694c8668f-gx777\" (UID: \"90270354-a779-4378-8bca-c2ff51ecac2e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gx777" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698564 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/35fa6f12-cf55-48d7-82ef-4987071adff7-audit-policies\") pod \"apiserver-7bbb656c7d-fv7gf\" (UID: \"35fa6f12-cf55-48d7-82ef-4987071adff7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698584 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698606 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5qh8\" (UniqueName: \"kubernetes.io/projected/7ff60cdb-3618-4902-a679-e5bda29c5c60-kube-api-access-h5qh8\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698628 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f8d67f2-74fc-4244-a62e-97fed3b28c79-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-fh4pc\" (UID: \"7f8d67f2-74fc-4244-a62e-97fed3b28c79\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fh4pc" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698664 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/81a41d1f-0c1d-41cf-991b-f521c34bde80-image-import-ca\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698685 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7vm5\" (UniqueName: \"kubernetes.io/projected/4ce7d7ec-2a9d-4404-917a-da07f09d990d-kube-api-access-z7vm5\") pod 
\"openshift-apiserver-operator-796bbdcf4f-rpx95\" (UID: \"4ce7d7ec-2a9d-4404-917a-da07f09d990d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rpx95" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698706 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a956e785-7e90-41d8-97ea-d89664b3719a-client-ca\") pod \"controller-manager-879f6c89f-hxljc\" (UID: \"a956e785-7e90-41d8-97ea-d89664b3719a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698729 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/35fa6f12-cf55-48d7-82ef-4987071adff7-audit-dir\") pod \"apiserver-7bbb656c7d-fv7gf\" (UID: \"35fa6f12-cf55-48d7-82ef-4987071adff7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698754 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f24eee-94ca-47b2-bcc5-389f01bf5849-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pgtxl\" (UID: \"79f24eee-94ca-47b2-bcc5-389f01bf5849\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgtxl" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.698776 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.699252 4811 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bpcmh"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.699632 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/81a41d1f-0c1d-41cf-991b-f521c34bde80-audit\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.699979 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8hwk8"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.700205 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a956e785-7e90-41d8-97ea-d89664b3719a-config\") pod \"controller-manager-879f6c89f-hxljc\" (UID: \"a956e785-7e90-41d8-97ea-d89664b3719a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.700508 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8hwk8" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.700837 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bpcmh" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.700979 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a956e785-7e90-41d8-97ea-d89664b3719a-client-ca\") pod \"controller-manager-879f6c89f-hxljc\" (UID: \"a956e785-7e90-41d8-97ea-d89664b3719a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.701357 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81a41d1f-0c1d-41cf-991b-f521c34bde80-trusted-ca-bundle\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.701468 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qnxsg"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.701574 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/90270354-a779-4378-8bca-c2ff51ecac2e-images\") pod \"machine-api-operator-5694c8668f-gx777\" (UID: \"90270354-a779-4378-8bca-c2ff51ecac2e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gx777" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.701805 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/81a41d1f-0c1d-41cf-991b-f521c34bde80-etcd-serving-ca\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.701859 4811 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/35fa6f12-cf55-48d7-82ef-4987071adff7-audit-dir\") pod \"apiserver-7bbb656c7d-fv7gf\" (UID: \"35fa6f12-cf55-48d7-82ef-4987071adff7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.701916 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90270354-a779-4378-8bca-c2ff51ecac2e-config\") pod \"machine-api-operator-5694c8668f-gx777\" (UID: \"90270354-a779-4378-8bca-c2ff51ecac2e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gx777" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.701954 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/35fa6f12-cf55-48d7-82ef-4987071adff7-audit-policies\") pod \"apiserver-7bbb656c7d-fv7gf\" (UID: \"35fa6f12-cf55-48d7-82ef-4987071adff7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.702037 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e17f4635-2bd6-4ad1-b337-63c0e87ac247-serving-cert\") pod \"route-controller-manager-6576b87f9c-zqrkd\" (UID: \"e17f4635-2bd6-4ad1-b337-63c0e87ac247\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.702335 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qnxsg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.702345 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.702409 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/35fa6f12-cf55-48d7-82ef-4987071adff7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fv7gf\" (UID: \"35fa6f12-cf55-48d7-82ef-4987071adff7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.702416 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20c76084-401b-41ca-ad08-2752d2d7132b-trusted-ca\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.702498 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-nl2ks"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.702536 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/81a41d1f-0c1d-41cf-991b-f521c34bde80-node-pullsecrets\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc 
kubenswrapper[4811]: I0216 20:58:41.702588 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e17f4635-2bd6-4ad1-b337-63c0e87ac247-client-ca\") pod \"route-controller-manager-6576b87f9c-zqrkd\" (UID: \"e17f4635-2bd6-4ad1-b337-63c0e87ac247\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.702660 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79f24eee-94ca-47b2-bcc5-389f01bf5849-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pgtxl\" (UID: \"79f24eee-94ca-47b2-bcc5-389f01bf5849\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgtxl" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.702899 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce7d7ec-2a9d-4404-917a-da07f09d990d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rpx95\" (UID: \"4ce7d7ec-2a9d-4404-917a-da07f09d990d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rpx95" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.702902 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.702963 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7f8d67f2-74fc-4244-a62e-97fed3b28c79-config\") pod \"kube-controller-manager-operator-78b949d7b-fh4pc\" (UID: \"7f8d67f2-74fc-4244-a62e-97fed3b28c79\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fh4pc" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.702996 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e17f4635-2bd6-4ad1-b337-63c0e87ac247-config\") pod \"route-controller-manager-6576b87f9c-zqrkd\" (UID: \"e17f4635-2bd6-4ad1-b337-63c0e87ac247\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.703015 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35fa6f12-cf55-48d7-82ef-4987071adff7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fv7gf\" (UID: \"35fa6f12-cf55-48d7-82ef-4987071adff7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.703041 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/90270354-a779-4378-8bca-c2ff51ecac2e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gx777\" (UID: \"90270354-a779-4378-8bca-c2ff51ecac2e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gx777" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.703089 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a956e785-7e90-41d8-97ea-d89664b3719a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hxljc\" (UID: \"a956e785-7e90-41d8-97ea-d89664b3719a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" Feb 16 20:58:41 crc 
kubenswrapper[4811]: I0216 20:58:41.703120 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.703144 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a101d06e-8e7f-4fcf-9788-a54237068ad7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zwtjs\" (UID: \"a101d06e-8e7f-4fcf-9788-a54237068ad7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zwtjs" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.703163 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/35fa6f12-cf55-48d7-82ef-4987071adff7-encryption-config\") pod \"apiserver-7bbb656c7d-fv7gf\" (UID: \"35fa6f12-cf55-48d7-82ef-4987071adff7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.703182 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b3a24eed-0751-45b0-945e-6351d15be4f6-auth-proxy-config\") pod \"machine-approver-56656f9798-g82fx\" (UID: \"b3a24eed-0751-45b0-945e-6351d15be4f6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g82fx" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.703221 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/b3a24eed-0751-45b0-945e-6351d15be4f6-machine-approver-tls\") pod \"machine-approver-56656f9798-g82fx\" (UID: \"b3a24eed-0751-45b0-945e-6351d15be4f6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g82fx" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.703253 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.703241 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/81a41d1f-0c1d-41cf-991b-f521c34bde80-node-pullsecrets\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.703275 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/35fa6f12-cf55-48d7-82ef-4987071adff7-etcd-client\") pod \"apiserver-7bbb656c7d-fv7gf\" (UID: \"35fa6f12-cf55-48d7-82ef-4987071adff7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.703315 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/20c76084-401b-41ca-ad08-2752d2d7132b-registry-tls\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.703376 4811 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2vpb\" (UniqueName: \"kubernetes.io/projected/48e7d3f9-4e00-476b-90bc-9d238ef4f5ca-kube-api-access-p2vpb\") pod \"console-operator-58897d9998-cch5x\" (UID: \"48e7d3f9-4e00-476b-90bc-9d238ef4f5ca\") " pod="openshift-console-operator/console-operator-58897d9998-cch5x" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.703405 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48e7d3f9-4e00-476b-90bc-9d238ef4f5ca-serving-cert\") pod \"console-operator-58897d9998-cch5x\" (UID: \"48e7d3f9-4e00-476b-90bc-9d238ef4f5ca\") " pod="openshift-console-operator/console-operator-58897d9998-cch5x" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.703445 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvm56\" (UniqueName: \"kubernetes.io/projected/e17f4635-2bd6-4ad1-b337-63c0e87ac247-kube-api-access-nvm56\") pod \"route-controller-manager-6576b87f9c-zqrkd\" (UID: \"e17f4635-2bd6-4ad1-b337-63c0e87ac247\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.703465 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/81a41d1f-0c1d-41cf-991b-f521c34bde80-encryption-config\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.703484 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/81a41d1f-0c1d-41cf-991b-f521c34bde80-audit-dir\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " 
pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.703526 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt45r\" (UniqueName: \"kubernetes.io/projected/20c76084-401b-41ca-ad08-2752d2d7132b-kube-api-access-pt45r\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.703554 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzdsd\" (UniqueName: \"kubernetes.io/projected/218883f2-cdcd-4b76-8f3c-dea0af40092c-kube-api-access-mzdsd\") pod \"cluster-samples-operator-665b6dd947-qjhps\" (UID: \"218883f2-cdcd-4b76-8f3c-dea0af40092c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qjhps" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.703592 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a101d06e-8e7f-4fcf-9788-a54237068ad7-serving-cert\") pod \"authentication-operator-69f744f599-zwtjs\" (UID: \"a101d06e-8e7f-4fcf-9788-a54237068ad7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zwtjs" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.703588 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35fa6f12-cf55-48d7-82ef-4987071adff7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fv7gf\" (UID: \"35fa6f12-cf55-48d7-82ef-4987071adff7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.703613 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.704309 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e17f4635-2bd6-4ad1-b337-63c0e87ac247-config\") pod \"route-controller-manager-6576b87f9c-zqrkd\" (UID: \"e17f4635-2bd6-4ad1-b337-63c0e87ac247\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.704377 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/81a41d1f-0c1d-41cf-991b-f521c34bde80-audit-dir\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.705915 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nl2ks" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.706004 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a41d1f-0c1d-41cf-991b-f521c34bde80-config\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.706310 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e17f4635-2bd6-4ad1-b337-63c0e87ac247-client-ca\") pod \"route-controller-manager-6576b87f9c-zqrkd\" (UID: \"e17f4635-2bd6-4ad1-b337-63c0e87ac247\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.706420 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8vgph"] Feb 16 20:58:41 crc kubenswrapper[4811]: E0216 20:58:41.706522 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:42.206495989 +0000 UTC m=+140.135791997 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.707012 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a956e785-7e90-41d8-97ea-d89664b3719a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hxljc\" (UID: \"a956e785-7e90-41d8-97ea-d89664b3719a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.707333 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n8rd6"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.717442 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35fa6f12-cf55-48d7-82ef-4987071adff7-serving-cert\") pod \"apiserver-7bbb656c7d-fv7gf\" (UID: \"35fa6f12-cf55-48d7-82ef-4987071adff7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.724176 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/81a41d1f-0c1d-41cf-991b-f521c34bde80-etcd-client\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.724733 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/35fa6f12-cf55-48d7-82ef-4987071adff7-etcd-client\") pod \"apiserver-7bbb656c7d-fv7gf\" (UID: \"35fa6f12-cf55-48d7-82ef-4987071adff7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.725440 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a956e785-7e90-41d8-97ea-d89664b3719a-serving-cert\") pod \"controller-manager-879f6c89f-hxljc\" (UID: \"a956e785-7e90-41d8-97ea-d89664b3719a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.725880 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/81a41d1f-0c1d-41cf-991b-f521c34bde80-encryption-config\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.726032 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/90270354-a779-4378-8bca-c2ff51ecac2e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gx777\" (UID: \"90270354-a779-4378-8bca-c2ff51ecac2e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gx777" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.726218 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/35fa6f12-cf55-48d7-82ef-4987071adff7-encryption-config\") pod \"apiserver-7bbb656c7d-fv7gf\" (UID: \"35fa6f12-cf55-48d7-82ef-4987071adff7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.726344 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" 
(UniqueName: \"kubernetes.io/configmap/81a41d1f-0c1d-41cf-991b-f521c34bde80-image-import-ca\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.726627 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a41d1f-0c1d-41cf-991b-f521c34bde80-serving-cert\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.727916 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-z67dz"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.728616 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e17f4635-2bd6-4ad1-b337-63c0e87ac247-serving-cert\") pod \"route-controller-manager-6576b87f9c-zqrkd\" (UID: \"e17f4635-2bd6-4ad1-b337-63c0e87ac247\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.729110 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.729916 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521245-9xzcs"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.730374 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-n8rd6" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.730607 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-z67dz" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.732065 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tm698"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.733134 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-9xzcs" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.733143 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.735717 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-ttddd"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.736375 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tm698" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.740435 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wljzx"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.740493 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-cch5x"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.740507 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8hwk8"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.740619 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-ttddd" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.740972 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n8rd6"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.741424 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ce7d7ec-2a9d-4404-917a-da07f09d990d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rpx95\" (UID: \"4ce7d7ec-2a9d-4404-917a-da07f09d990d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rpx95" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.745385 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ktp4v"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.745472 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-nlp5w"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.747782 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-nl2ks"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.747906 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-nlp5w" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.749563 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kk27l"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.751038 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-hbhzt"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.752706 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-9hxzk"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.752811 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-hbhzt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.755044 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-zwtjs"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.756841 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbbzr"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.757317 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.758328 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fh4pc"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.759823 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4z425"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.761619 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4km29"] Feb 16 
20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.762832 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2cqpn"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.766077 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6q4n5"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.767310 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffr95"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.768865 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qnxsg"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.771329 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-z67dz"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.773334 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9ckm2"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.774915 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521245-9xzcs"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.776392 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bpcmh"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.777966 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7wh82"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.779551 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-ttddd"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.781349 4811 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nlp5w"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.783077 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vx5rb"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.785140 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gdcrk"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.789810 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tm698"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.792438 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mvkhm"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.792618 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.793721 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mvkhm"] Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.793860 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-mvkhm" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.804317 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:41 crc kubenswrapper[4811]: E0216 20:58:41.804436 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:42.30441306 +0000 UTC m=+140.233708998 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.804671 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mwg4\" (UniqueName: \"kubernetes.io/projected/f2292d96-838d-4d2c-a325-bb2d7f2d2eda-kube-api-access-4mwg4\") pod \"etcd-operator-b45778765-9hxzk\" (UID: \"f2292d96-838d-4d2c-a325-bb2d7f2d2eda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9hxzk" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.804703 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/45722898-287e-4a8e-8816-5928e178d2d7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-n8rd6\" (UID: \"45722898-287e-4a8e-8816-5928e178d2d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-n8rd6" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.804767 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79f24eee-94ca-47b2-bcc5-389f01bf5849-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pgtxl\" (UID: \"79f24eee-94ca-47b2-bcc5-389f01bf5849\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgtxl" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.804818 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.805621 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.804851 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jvwm\" (UniqueName: \"kubernetes.io/projected/b848efbd-79a2-4b6b-a42f-36f109a33e01-kube-api-access-6jvwm\") pod \"control-plane-machine-set-operator-78cbb6b69f-qnxsg\" (UID: \"b848efbd-79a2-4b6b-a42f-36f109a33e01\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qnxsg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.805696 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2292d96-838d-4d2c-a325-bb2d7f2d2eda-config\") pod \"etcd-operator-b45778765-9hxzk\" (UID: \"f2292d96-838d-4d2c-a325-bb2d7f2d2eda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9hxzk" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.805734 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a101d06e-8e7f-4fcf-9788-a54237068ad7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zwtjs\" (UID: \"a101d06e-8e7f-4fcf-9788-a54237068ad7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zwtjs" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.805754 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqmxf\" (UniqueName: \"kubernetes.io/projected/45722898-287e-4a8e-8816-5928e178d2d7-kube-api-access-kqmxf\") pod \"marketplace-operator-79b997595-n8rd6\" (UID: \"45722898-287e-4a8e-8816-5928e178d2d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-n8rd6" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.805775 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b3a24eed-0751-45b0-945e-6351d15be4f6-auth-proxy-config\") pod \"machine-approver-56656f9798-g82fx\" (UID: \"b3a24eed-0751-45b0-945e-6351d15be4f6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g82fx" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.805800 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.805824 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-console-serving-cert\") pod \"console-f9d7485db-8vgph\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " pod="openshift-console/console-f9d7485db-8vgph" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.805849 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2vpb\" (UniqueName: \"kubernetes.io/projected/48e7d3f9-4e00-476b-90bc-9d238ef4f5ca-kube-api-access-p2vpb\") pod \"console-operator-58897d9998-cch5x\" (UID: \"48e7d3f9-4e00-476b-90bc-9d238ef4f5ca\") " pod="openshift-console-operator/console-operator-58897d9998-cch5x" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.805868 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7f1912bb-76f1-493c-b982-2a75e48cb649-metrics-tls\") pod \"ingress-operator-5b745b69d9-4km29\" (UID: \"7f1912bb-76f1-493c-b982-2a75e48cb649\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4km29" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.805887 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2292d96-838d-4d2c-a325-bb2d7f2d2eda-serving-cert\") pod \"etcd-operator-b45778765-9hxzk\" (UID: \"f2292d96-838d-4d2c-a325-bb2d7f2d2eda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9hxzk" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 
20:58:41.805910 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2d817b52-21fc-40d9-a36f-487e6719ebfe-default-certificate\") pod \"router-default-5444994796-lbxk8\" (UID: \"2d817b52-21fc-40d9-a36f-487e6719ebfe\") " pod="openshift-ingress/router-default-5444994796-lbxk8" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.805928 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6hqw\" (UniqueName: \"kubernetes.io/projected/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-kube-api-access-v6hqw\") pod \"console-f9d7485db-8vgph\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " pod="openshift-console/console-f9d7485db-8vgph" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.805950 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a101d06e-8e7f-4fcf-9788-a54237068ad7-serving-cert\") pod \"authentication-operator-69f744f599-zwtjs\" (UID: \"a101d06e-8e7f-4fcf-9788-a54237068ad7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zwtjs" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.805971 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d76477a2-14d0-4d86-b850-a980bf3ca21a-images\") pod \"machine-config-operator-74547568cd-8hwk8\" (UID: \"d76477a2-14d0-4d86-b850-a980bf3ca21a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8hwk8" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.805990 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zg85\" (UniqueName: \"kubernetes.io/projected/c699beb7-358c-424b-ab7e-cd1396bd8803-kube-api-access-6zg85\") pod \"migrator-59844c95c7-nl2ks\" 
(UID: \"c699beb7-358c-424b-ab7e-cd1396bd8803\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nl2ks" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806011 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a101d06e-8e7f-4fcf-9788-a54237068ad7-config\") pod \"authentication-operator-69f744f599-zwtjs\" (UID: \"a101d06e-8e7f-4fcf-9788-a54237068ad7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zwtjs" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806030 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a354d7fc-db3d-4d2b-bab5-973e5fb71d3e-cert\") pod \"ingress-canary-ttddd\" (UID: \"a354d7fc-db3d-4d2b-bab5-973e5fb71d3e\") " pod="openshift-ingress-canary/ingress-canary-ttddd" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806050 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d76477a2-14d0-4d86-b850-a980bf3ca21a-proxy-tls\") pod \"machine-config-operator-74547568cd-8hwk8\" (UID: \"d76477a2-14d0-4d86-b850-a980bf3ca21a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8hwk8" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806069 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjwdv\" (UniqueName: \"kubernetes.io/projected/56ef6d7e-b0bf-4bfa-8426-68040e136fe1-kube-api-access-tjwdv\") pod \"downloads-7954f5f757-mn795\" (UID: \"56ef6d7e-b0bf-4bfa-8426-68040e136fe1\") " pod="openshift-console/downloads-7954f5f757-mn795" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806088 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/7ff60cdb-3618-4902-a679-e5bda29c5c60-audit-policies\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806108 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/451b0c97-6ae1-4cb7-ac95-e4ecf08b0587-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ffr95\" (UID: \"451b0c97-6ae1-4cb7-ac95-e4ecf08b0587\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffr95" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806131 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48e7d3f9-4e00-476b-90bc-9d238ef4f5ca-config\") pod \"console-operator-58897d9998-cch5x\" (UID: \"48e7d3f9-4e00-476b-90bc-9d238ef4f5ca\") " pod="openshift-console-operator/console-operator-58897d9998-cch5x" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806155 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806175 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/20c76084-401b-41ca-ad08-2752d2d7132b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:41 crc 
kubenswrapper[4811]: I0216 20:58:41.806217 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbrvq\" (UniqueName: \"kubernetes.io/projected/7f1912bb-76f1-493c-b982-2a75e48cb649-kube-api-access-lbrvq\") pod \"ingress-operator-5b745b69d9-4km29\" (UID: \"7f1912bb-76f1-493c-b982-2a75e48cb649\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4km29" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806237 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48e7d3f9-4e00-476b-90bc-9d238ef4f5ca-trusted-ca\") pod \"console-operator-58897d9998-cch5x\" (UID: \"48e7d3f9-4e00-476b-90bc-9d238ef4f5ca\") " pod="openshift-console-operator/console-operator-58897d9998-cch5x" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806257 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806276 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/451b0c97-6ae1-4cb7-ac95-e4ecf08b0587-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ffr95\" (UID: \"451b0c97-6ae1-4cb7-ac95-e4ecf08b0587\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffr95" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806294 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f2292d96-838d-4d2c-a325-bb2d7f2d2eda-etcd-client\") 
pod \"etcd-operator-b45778765-9hxzk\" (UID: \"f2292d96-838d-4d2c-a325-bb2d7f2d2eda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9hxzk" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806321 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3a24eed-0751-45b0-945e-6351d15be4f6-config\") pod \"machine-approver-56656f9798-g82fx\" (UID: \"b3a24eed-0751-45b0-945e-6351d15be4f6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g82fx" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806339 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/eddedb9f-4d8f-467e-94a0-3e2b45746f42-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gdcrk\" (UID: \"eddedb9f-4d8f-467e-94a0-3e2b45746f42\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gdcrk" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806360 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgk99\" (UniqueName: \"kubernetes.io/projected/a101d06e-8e7f-4fcf-9788-a54237068ad7-kube-api-access-hgk99\") pod \"authentication-operator-69f744f599-zwtjs\" (UID: \"a101d06e-8e7f-4fcf-9788-a54237068ad7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zwtjs" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806380 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f8d67f2-74fc-4244-a62e-97fed3b28c79-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-fh4pc\" (UID: \"7f8d67f2-74fc-4244-a62e-97fed3b28c79\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fh4pc" Feb 16 20:58:41 crc kubenswrapper[4811]: 
I0216 20:58:41.806414 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806436 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5qh8\" (UniqueName: \"kubernetes.io/projected/7ff60cdb-3618-4902-a679-e5bda29c5c60-kube-api-access-h5qh8\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806455 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/451b0c97-6ae1-4cb7-ac95-e4ecf08b0587-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ffr95\" (UID: \"451b0c97-6ae1-4cb7-ac95-e4ecf08b0587\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffr95" Feb 16 20:58:41 crc kubenswrapper[4811]: E0216 20:58:41.806475 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:42.306453551 +0000 UTC m=+140.235749489 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806515 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9e30dd8d-c885-4715-916c-2f87ff167589-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7wh82\" (UID: \"9e30dd8d-c885-4715-916c-2f87ff167589\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7wh82" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806556 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27b16b4d-6b71-4eba-955c-2f33c6c73a9d-config\") pod \"kube-apiserver-operator-766d6c64bb-wljzx\" (UID: \"27b16b4d-6b71-4eba-955c-2f33c6c73a9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wljzx" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806601 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f24eee-94ca-47b2-bcc5-389f01bf5849-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pgtxl\" (UID: \"79f24eee-94ca-47b2-bcc5-389f01bf5849\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgtxl" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806630 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806651 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/45722898-287e-4a8e-8816-5928e178d2d7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-n8rd6\" (UID: \"45722898-287e-4a8e-8816-5928e178d2d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-n8rd6" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806671 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d817b52-21fc-40d9-a36f-487e6719ebfe-metrics-certs\") pod \"router-default-5444994796-lbxk8\" (UID: \"2d817b52-21fc-40d9-a36f-487e6719ebfe\") " pod="openshift-ingress/router-default-5444994796-lbxk8" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806691 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-console-config\") pod \"console-f9d7485db-8vgph\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " pod="openshift-console/console-f9d7485db-8vgph" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806710 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20c76084-401b-41ca-ad08-2752d2d7132b-trusted-ca\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 
20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806732 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806765 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhj5t\" (UniqueName: \"kubernetes.io/projected/2d817b52-21fc-40d9-a36f-487e6719ebfe-kube-api-access-hhj5t\") pod \"router-default-5444994796-lbxk8\" (UID: \"2d817b52-21fc-40d9-a36f-487e6719ebfe\") " pod="openshift-ingress/router-default-5444994796-lbxk8" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806909 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a101d06e-8e7f-4fcf-9788-a54237068ad7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zwtjs\" (UID: \"a101d06e-8e7f-4fcf-9788-a54237068ad7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zwtjs" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.806999 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/20c76084-401b-41ca-ad08-2752d2d7132b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.807137 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b3a24eed-0751-45b0-945e-6351d15be4f6-auth-proxy-config\") pod 
\"machine-approver-56656f9798-g82fx\" (UID: \"b3a24eed-0751-45b0-945e-6351d15be4f6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g82fx" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.807737 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-console-oauth-config\") pod \"console-f9d7485db-8vgph\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " pod="openshift-console/console-f9d7485db-8vgph" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.807817 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8d67f2-74fc-4244-a62e-97fed3b28c79-config\") pod \"kube-controller-manager-operator-78b949d7b-fh4pc\" (UID: \"7f8d67f2-74fc-4244-a62e-97fed3b28c79\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fh4pc" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.807877 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7f1912bb-76f1-493c-b982-2a75e48cb649-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4km29\" (UID: \"7f1912bb-76f1-493c-b982-2a75e48cb649\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4km29" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.807962 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a101d06e-8e7f-4fcf-9788-a54237068ad7-config\") pod \"authentication-operator-69f744f599-zwtjs\" (UID: \"a101d06e-8e7f-4fcf-9788-a54237068ad7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zwtjs" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.808091 4811 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7ff60cdb-3618-4902-a679-e5bda29c5c60-audit-policies\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.808391 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48e7d3f9-4e00-476b-90bc-9d238ef4f5ca-config\") pod \"console-operator-58897d9998-cch5x\" (UID: \"48e7d3f9-4e00-476b-90bc-9d238ef4f5ca\") " pod="openshift-console-operator/console-operator-58897d9998-cch5x" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.808754 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8d67f2-74fc-4244-a62e-97fed3b28c79-config\") pod \"kube-controller-manager-operator-78b949d7b-fh4pc\" (UID: \"7f8d67f2-74fc-4244-a62e-97fed3b28c79\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fh4pc" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.808939 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.808937 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48e7d3f9-4e00-476b-90bc-9d238ef4f5ca-trusted-ca\") pod \"console-operator-58897d9998-cch5x\" (UID: \"48e7d3f9-4e00-476b-90bc-9d238ef4f5ca\") " pod="openshift-console-operator/console-operator-58897d9998-cch5x" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 
20:58:41.808984 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fae08180-6d56-48f5-99c6-d98b52eb0ccf-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kk27l\" (UID: \"fae08180-6d56-48f5-99c6-d98b52eb0ccf\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kk27l" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.809417 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3a24eed-0751-45b0-945e-6351d15be4f6-config\") pod \"machine-approver-56656f9798-g82fx\" (UID: \"b3a24eed-0751-45b0-945e-6351d15be4f6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g82fx" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.809572 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b3a24eed-0751-45b0-945e-6351d15be4f6-machine-approver-tls\") pod \"machine-approver-56656f9798-g82fx\" (UID: \"b3a24eed-0751-45b0-945e-6351d15be4f6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g82fx" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.809625 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/20c76084-401b-41ca-ad08-2752d2d7132b-registry-tls\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.809753 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2d817b52-21fc-40d9-a36f-487e6719ebfe-stats-auth\") pod \"router-default-5444994796-lbxk8\" (UID: \"2d817b52-21fc-40d9-a36f-487e6719ebfe\") " 
pod="openshift-ingress/router-default-5444994796-lbxk8" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.809897 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48e7d3f9-4e00-476b-90bc-9d238ef4f5ca-serving-cert\") pod \"console-operator-58897d9998-cch5x\" (UID: \"48e7d3f9-4e00-476b-90bc-9d238ef4f5ca\") " pod="openshift-console-operator/console-operator-58897d9998-cch5x" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.809950 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4xx2\" (UniqueName: \"kubernetes.io/projected/fae08180-6d56-48f5-99c6-d98b52eb0ccf-kube-api-access-c4xx2\") pod \"multus-admission-controller-857f4d67dd-kk27l\" (UID: \"fae08180-6d56-48f5-99c6-d98b52eb0ccf\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kk27l" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.810020 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt45r\" (UniqueName: \"kubernetes.io/projected/20c76084-401b-41ca-ad08-2752d2d7132b-kube-api-access-pt45r\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.810086 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27b16b4d-6b71-4eba-955c-2f33c6c73a9d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wljzx\" (UID: \"27b16b4d-6b71-4eba-955c-2f33c6c73a9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wljzx" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.810126 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzdsd\" (UniqueName: 
\"kubernetes.io/projected/218883f2-cdcd-4b76-8f3c-dea0af40092c-kube-api-access-mzdsd\") pod \"cluster-samples-operator-665b6dd947-qjhps\" (UID: \"218883f2-cdcd-4b76-8f3c-dea0af40092c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qjhps" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.810166 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.810221 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqqjr\" (UniqueName: \"kubernetes.io/projected/b3a24eed-0751-45b0-945e-6351d15be4f6-kube-api-access-dqqjr\") pod \"machine-approver-56656f9798-g82fx\" (UID: \"b3a24eed-0751-45b0-945e-6351d15be4f6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g82fx" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.810255 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d817b52-21fc-40d9-a36f-487e6719ebfe-service-ca-bundle\") pod \"router-default-5444994796-lbxk8\" (UID: \"2d817b52-21fc-40d9-a36f-487e6719ebfe\") " pod="openshift-ingress/router-default-5444994796-lbxk8" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.810293 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/20c76084-401b-41ca-ad08-2752d2d7132b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.810329 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d76477a2-14d0-4d86-b850-a980bf3ca21a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8hwk8\" (UID: \"d76477a2-14d0-4d86-b850-a980bf3ca21a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8hwk8" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.810364 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f2292d96-838d-4d2c-a325-bb2d7f2d2eda-etcd-service-ca\") pod \"etcd-operator-b45778765-9hxzk\" (UID: \"f2292d96-838d-4d2c-a325-bb2d7f2d2eda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9hxzk" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.810395 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h72ts\" (UniqueName: \"kubernetes.io/projected/eddedb9f-4d8f-467e-94a0-3e2b45746f42-kube-api-access-h72ts\") pod \"package-server-manager-789f6589d5-gdcrk\" (UID: \"eddedb9f-4d8f-467e-94a0-3e2b45746f42\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gdcrk" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.810444 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmxt2\" (UniqueName: \"kubernetes.io/projected/9e30dd8d-c885-4715-916c-2f87ff167589-kube-api-access-lmxt2\") pod \"machine-config-controller-84d6567774-7wh82\" (UID: \"9e30dd8d-c885-4715-916c-2f87ff167589\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7wh82" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.810474 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f1912bb-76f1-493c-b982-2a75e48cb649-trusted-ca\") pod \"ingress-operator-5b745b69d9-4km29\" (UID: \"7f1912bb-76f1-493c-b982-2a75e48cb649\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4km29"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.810503 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27b16b4d-6b71-4eba-955c-2f33c6c73a9d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wljzx\" (UID: \"27b16b4d-6b71-4eba-955c-2f33c6c73a9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wljzx"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.810536 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-trusted-ca-bundle\") pod \"console-f9d7485db-8vgph\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " pod="openshift-console/console-f9d7485db-8vgph"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.810537 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f24eee-94ca-47b2-bcc5-389f01bf5849-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pgtxl\" (UID: \"79f24eee-94ca-47b2-bcc5-389f01bf5849\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgtxl"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.810579 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/20c76084-401b-41ca-ad08-2752d2d7132b-registry-certificates\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.810635 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.810711 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79f24eee-94ca-47b2-bcc5-389f01bf5849-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pgtxl\" (UID: \"79f24eee-94ca-47b2-bcc5-389f01bf5849\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgtxl"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.810916 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20c76084-401b-41ca-ad08-2752d2d7132b-trusted-ca\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.810983 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/218883f2-cdcd-4b76-8f3c-dea0af40092c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qjhps\" (UID: \"218883f2-cdcd-4b76-8f3c-dea0af40092c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qjhps"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.811795 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52n6w\" (UniqueName: \"kubernetes.io/projected/a354d7fc-db3d-4d2b-bab5-973e5fb71d3e-kube-api-access-52n6w\") pod \"ingress-canary-ttddd\" (UID: \"a354d7fc-db3d-4d2b-bab5-973e5fb71d3e\") " pod="openshift-ingress-canary/ingress-canary-ttddd"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.811847 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9e30dd8d-c885-4715-916c-2f87ff167589-proxy-tls\") pod \"machine-config-controller-84d6567774-7wh82\" (UID: \"9e30dd8d-c885-4715-916c-2f87ff167589\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7wh82"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.811892 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7ff60cdb-3618-4902-a679-e5bda29c5c60-audit-dir\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.811910 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/20c76084-401b-41ca-ad08-2752d2d7132b-registry-certificates\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.811912 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.812082 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f2292d96-838d-4d2c-a325-bb2d7f2d2eda-etcd-ca\") pod \"etcd-operator-b45778765-9hxzk\" (UID: \"f2292d96-838d-4d2c-a325-bb2d7f2d2eda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9hxzk"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.812121 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-service-ca\") pod \"console-f9d7485db-8vgph\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " pod="openshift-console/console-f9d7485db-8vgph"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.812160 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a101d06e-8e7f-4fcf-9788-a54237068ad7-service-ca-bundle\") pod \"authentication-operator-69f744f599-zwtjs\" (UID: \"a101d06e-8e7f-4fcf-9788-a54237068ad7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zwtjs"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.812253 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4cb2\" (UniqueName: \"kubernetes.io/projected/79f24eee-94ca-47b2-bcc5-389f01bf5849-kube-api-access-r4cb2\") pod \"openshift-controller-manager-operator-756b6f6bc6-pgtxl\" (UID: \"79f24eee-94ca-47b2-bcc5-389f01bf5849\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgtxl"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.812284 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-oauth-serving-cert\") pod \"console-f9d7485db-8vgph\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " pod="openshift-console/console-f9d7485db-8vgph"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.812337 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.812409 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2ncs\" (UniqueName: \"kubernetes.io/projected/451b0c97-6ae1-4cb7-ac95-e4ecf08b0587-kube-api-access-j2ncs\") pod \"cluster-image-registry-operator-dc59b4c8b-ffr95\" (UID: \"451b0c97-6ae1-4cb7-ac95-e4ecf08b0587\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffr95"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.812460 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b848efbd-79a2-4b6b-a42f-36f109a33e01-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qnxsg\" (UID: \"b848efbd-79a2-4b6b-a42f-36f109a33e01\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qnxsg"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.812496 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20c76084-401b-41ca-ad08-2752d2d7132b-bound-sa-token\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.812525 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85zzt\" (UniqueName: \"kubernetes.io/projected/d76477a2-14d0-4d86-b850-a980bf3ca21a-kube-api-access-85zzt\") pod \"machine-config-operator-74547568cd-8hwk8\" (UID: \"d76477a2-14d0-4d86-b850-a980bf3ca21a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8hwk8"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.813029 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.813043 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7ff60cdb-3618-4902-a679-e5bda29c5c60-audit-dir\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.813146 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.813145 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f8d67f2-74fc-4244-a62e-97fed3b28c79-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-fh4pc\" (UID: \"7f8d67f2-74fc-4244-a62e-97fed3b28c79\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fh4pc"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.813983 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a101d06e-8e7f-4fcf-9788-a54237068ad7-service-ca-bundle\") pod \"authentication-operator-69f744f599-zwtjs\" (UID: \"a101d06e-8e7f-4fcf-9788-a54237068ad7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zwtjs"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.814021 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f8d67f2-74fc-4244-a62e-97fed3b28c79-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-fh4pc\" (UID: \"7f8d67f2-74fc-4244-a62e-97fed3b28c79\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fh4pc"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.814306 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/20c76084-401b-41ca-ad08-2752d2d7132b-registry-tls\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.814430 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a101d06e-8e7f-4fcf-9788-a54237068ad7-serving-cert\") pod \"authentication-operator-69f744f599-zwtjs\" (UID: \"a101d06e-8e7f-4fcf-9788-a54237068ad7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zwtjs"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.815554 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.815842 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.816395 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.817275 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48e7d3f9-4e00-476b-90bc-9d238ef4f5ca-serving-cert\") pod \"console-operator-58897d9998-cch5x\" (UID: \"48e7d3f9-4e00-476b-90bc-9d238ef4f5ca\") " pod="openshift-console-operator/console-operator-58897d9998-cch5x"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.818650 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/20c76084-401b-41ca-ad08-2752d2d7132b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.820746 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.821638 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/218883f2-cdcd-4b76-8f3c-dea0af40092c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qjhps\" (UID: \"218883f2-cdcd-4b76-8f3c-dea0af40092c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qjhps"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.821876 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.822027 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.824400 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.824800 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b3a24eed-0751-45b0-945e-6351d15be4f6-machine-approver-tls\") pod \"machine-approver-56656f9798-g82fx\" (UID: \"b3a24eed-0751-45b0-945e-6351d15be4f6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g82fx"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.832416 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.833997 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.834625 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.851089 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.870726 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.890982 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.912447 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.914998 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.915344 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmxt2\" (UniqueName: \"kubernetes.io/projected/9e30dd8d-c885-4715-916c-2f87ff167589-kube-api-access-lmxt2\") pod \"machine-config-controller-84d6567774-7wh82\" (UID: \"9e30dd8d-c885-4715-916c-2f87ff167589\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7wh82"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.915392 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f1912bb-76f1-493c-b982-2a75e48cb649-trusted-ca\") pod \"ingress-operator-5b745b69d9-4km29\" (UID: \"7f1912bb-76f1-493c-b982-2a75e48cb649\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4km29"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.915420 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27b16b4d-6b71-4eba-955c-2f33c6c73a9d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wljzx\" (UID: \"27b16b4d-6b71-4eba-955c-2f33c6c73a9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wljzx"
Feb 16 20:58:41 crc kubenswrapper[4811]: E0216 20:58:41.915461 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:42.415422011 +0000 UTC m=+140.344717959 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.915514 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-trusted-ca-bundle\") pod \"console-f9d7485db-8vgph\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " pod="openshift-console/console-f9d7485db-8vgph"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.915585 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52n6w\" (UniqueName: \"kubernetes.io/projected/a354d7fc-db3d-4d2b-bab5-973e5fb71d3e-kube-api-access-52n6w\") pod \"ingress-canary-ttddd\" (UID: \"a354d7fc-db3d-4d2b-bab5-973e5fb71d3e\") " pod="openshift-ingress-canary/ingress-canary-ttddd"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.915629 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9e30dd8d-c885-4715-916c-2f87ff167589-proxy-tls\") pod \"machine-config-controller-84d6567774-7wh82\" (UID: \"9e30dd8d-c885-4715-916c-2f87ff167589\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7wh82"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.915666 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-service-ca\") pod \"console-f9d7485db-8vgph\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " pod="openshift-console/console-f9d7485db-8vgph"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.915700 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f2292d96-838d-4d2c-a325-bb2d7f2d2eda-etcd-ca\") pod \"etcd-operator-b45778765-9hxzk\" (UID: \"f2292d96-838d-4d2c-a325-bb2d7f2d2eda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9hxzk"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.915738 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2ncs\" (UniqueName: \"kubernetes.io/projected/451b0c97-6ae1-4cb7-ac95-e4ecf08b0587-kube-api-access-j2ncs\") pod \"cluster-image-registry-operator-dc59b4c8b-ffr95\" (UID: \"451b0c97-6ae1-4cb7-ac95-e4ecf08b0587\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffr95"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.916383 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-oauth-serving-cert\") pod \"console-f9d7485db-8vgph\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " pod="openshift-console/console-f9d7485db-8vgph"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.916448 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b848efbd-79a2-4b6b-a42f-36f109a33e01-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qnxsg\" (UID: \"b848efbd-79a2-4b6b-a42f-36f109a33e01\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qnxsg"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.916489 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85zzt\" (UniqueName: \"kubernetes.io/projected/d76477a2-14d0-4d86-b850-a980bf3ca21a-kube-api-access-85zzt\") pod \"machine-config-operator-74547568cd-8hwk8\" (UID: \"d76477a2-14d0-4d86-b850-a980bf3ca21a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8hwk8"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.916539 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mwg4\" (UniqueName: \"kubernetes.io/projected/f2292d96-838d-4d2c-a325-bb2d7f2d2eda-kube-api-access-4mwg4\") pod \"etcd-operator-b45778765-9hxzk\" (UID: \"f2292d96-838d-4d2c-a325-bb2d7f2d2eda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9hxzk"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.916565 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45722898-287e-4a8e-8816-5928e178d2d7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-n8rd6\" (UID: \"45722898-287e-4a8e-8816-5928e178d2d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-n8rd6"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.916595 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jvwm\" (UniqueName: \"kubernetes.io/projected/b848efbd-79a2-4b6b-a42f-36f109a33e01-kube-api-access-6jvwm\") pod \"control-plane-machine-set-operator-78cbb6b69f-qnxsg\" (UID: \"b848efbd-79a2-4b6b-a42f-36f109a33e01\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qnxsg"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.916624 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2292d96-838d-4d2c-a325-bb2d7f2d2eda-config\") pod \"etcd-operator-b45778765-9hxzk\" (UID: \"f2292d96-838d-4d2c-a325-bb2d7f2d2eda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9hxzk"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.916648 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqmxf\" (UniqueName: \"kubernetes.io/projected/45722898-287e-4a8e-8816-5928e178d2d7-kube-api-access-kqmxf\") pod \"marketplace-operator-79b997595-n8rd6\" (UID: \"45722898-287e-4a8e-8816-5928e178d2d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-n8rd6"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.916675 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.916699 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-console-serving-cert\") pod \"console-f9d7485db-8vgph\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " pod="openshift-console/console-f9d7485db-8vgph"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.916744 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7f1912bb-76f1-493c-b982-2a75e48cb649-metrics-tls\") pod \"ingress-operator-5b745b69d9-4km29\" (UID: \"7f1912bb-76f1-493c-b982-2a75e48cb649\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4km29"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.916778 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2292d96-838d-4d2c-a325-bb2d7f2d2eda-serving-cert\") pod \"etcd-operator-b45778765-9hxzk\" (UID: \"f2292d96-838d-4d2c-a325-bb2d7f2d2eda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9hxzk"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.916806 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2d817b52-21fc-40d9-a36f-487e6719ebfe-default-certificate\") pod \"router-default-5444994796-lbxk8\" (UID: \"2d817b52-21fc-40d9-a36f-487e6719ebfe\") " pod="openshift-ingress/router-default-5444994796-lbxk8"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.916831 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6hqw\" (UniqueName: \"kubernetes.io/projected/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-kube-api-access-v6hqw\") pod \"console-f9d7485db-8vgph\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " pod="openshift-console/console-f9d7485db-8vgph"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.916855 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d76477a2-14d0-4d86-b850-a980bf3ca21a-images\") pod \"machine-config-operator-74547568cd-8hwk8\" (UID: \"d76477a2-14d0-4d86-b850-a980bf3ca21a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8hwk8"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.916878 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zg85\" (UniqueName: \"kubernetes.io/projected/c699beb7-358c-424b-ab7e-cd1396bd8803-kube-api-access-6zg85\") pod \"migrator-59844c95c7-nl2ks\" (UID: \"c699beb7-358c-424b-ab7e-cd1396bd8803\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nl2ks"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.916906 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a354d7fc-db3d-4d2b-bab5-973e5fb71d3e-cert\") pod \"ingress-canary-ttddd\" (UID: \"a354d7fc-db3d-4d2b-bab5-973e5fb71d3e\") " pod="openshift-ingress-canary/ingress-canary-ttddd"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.916933 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d76477a2-14d0-4d86-b850-a980bf3ca21a-proxy-tls\") pod \"machine-config-operator-74547568cd-8hwk8\" (UID: \"d76477a2-14d0-4d86-b850-a980bf3ca21a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8hwk8"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.916947 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f1912bb-76f1-493c-b982-2a75e48cb649-trusted-ca\") pod \"ingress-operator-5b745b69d9-4km29\" (UID: \"7f1912bb-76f1-493c-b982-2a75e48cb649\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4km29"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.916970 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/451b0c97-6ae1-4cb7-ac95-e4ecf08b0587-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ffr95\" (UID: \"451b0c97-6ae1-4cb7-ac95-e4ecf08b0587\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffr95"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.917020 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbrvq\" (UniqueName: \"kubernetes.io/projected/7f1912bb-76f1-493c-b982-2a75e48cb649-kube-api-access-lbrvq\") pod \"ingress-operator-5b745b69d9-4km29\" (UID: \"7f1912bb-76f1-493c-b982-2a75e48cb649\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4km29"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.917048 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f2292d96-838d-4d2c-a325-bb2d7f2d2eda-etcd-client\") pod \"etcd-operator-b45778765-9hxzk\" (UID: \"f2292d96-838d-4d2c-a325-bb2d7f2d2eda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9hxzk"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.917068 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/451b0c97-6ae1-4cb7-ac95-e4ecf08b0587-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ffr95\" (UID: \"451b0c97-6ae1-4cb7-ac95-e4ecf08b0587\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffr95"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.917098 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/eddedb9f-4d8f-467e-94a0-3e2b45746f42-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gdcrk\" (UID: \"eddedb9f-4d8f-467e-94a0-3e2b45746f42\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gdcrk"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.917148 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/451b0c97-6ae1-4cb7-ac95-e4ecf08b0587-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ffr95\" (UID: \"451b0c97-6ae1-4cb7-ac95-e4ecf08b0587\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffr95"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.917167 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27b16b4d-6b71-4eba-955c-2f33c6c73a9d-config\") pod \"kube-apiserver-operator-766d6c64bb-wljzx\" (UID: \"27b16b4d-6b71-4eba-955c-2f33c6c73a9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wljzx"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.917189 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9e30dd8d-c885-4715-916c-2f87ff167589-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7wh82\" (UID: \"9e30dd8d-c885-4715-916c-2f87ff167589\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7wh82"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.917242 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/45722898-287e-4a8e-8816-5928e178d2d7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-n8rd6\" (UID: \"45722898-287e-4a8e-8816-5928e178d2d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-n8rd6"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.917269 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d817b52-21fc-40d9-a36f-487e6719ebfe-metrics-certs\") pod \"router-default-5444994796-lbxk8\" (UID: \"2d817b52-21fc-40d9-a36f-487e6719ebfe\") " pod="openshift-ingress/router-default-5444994796-lbxk8"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.917294 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-console-config\") pod \"console-f9d7485db-8vgph\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " pod="openshift-console/console-f9d7485db-8vgph"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.917342 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhj5t\" (UniqueName: \"kubernetes.io/projected/2d817b52-21fc-40d9-a36f-487e6719ebfe-kube-api-access-hhj5t\") pod \"router-default-5444994796-lbxk8\" (UID: \"2d817b52-21fc-40d9-a36f-487e6719ebfe\") " pod="openshift-ingress/router-default-5444994796-lbxk8"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.917374 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-console-oauth-config\") pod \"console-f9d7485db-8vgph\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " pod="openshift-console/console-f9d7485db-8vgph"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.917402 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7f1912bb-76f1-493c-b982-2a75e48cb649-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4km29\" (UID: \"7f1912bb-76f1-493c-b982-2a75e48cb649\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4km29"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.917435 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fae08180-6d56-48f5-99c6-d98b52eb0ccf-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kk27l\" (UID: \"fae08180-6d56-48f5-99c6-d98b52eb0ccf\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kk27l"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.917455 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2d817b52-21fc-40d9-a36f-487e6719ebfe-stats-auth\") pod \"router-default-5444994796-lbxk8\" (UID: \"2d817b52-21fc-40d9-a36f-487e6719ebfe\") " pod="openshift-ingress/router-default-5444994796-lbxk8"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.917485 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4xx2\" (UniqueName: \"kubernetes.io/projected/fae08180-6d56-48f5-99c6-d98b52eb0ccf-kube-api-access-c4xx2\") pod \"multus-admission-controller-857f4d67dd-kk27l\" (UID: \"fae08180-6d56-48f5-99c6-d98b52eb0ccf\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kk27l"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.917509 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27b16b4d-6b71-4eba-955c-2f33c6c73a9d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wljzx\" (UID: \"27b16b4d-6b71-4eba-955c-2f33c6c73a9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wljzx"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.917552 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d817b52-21fc-40d9-a36f-487e6719ebfe-service-ca-bundle\") pod \"router-default-5444994796-lbxk8\" (UID: \"2d817b52-21fc-40d9-a36f-487e6719ebfe\") " pod="openshift-ingress/router-default-5444994796-lbxk8"
Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.917585 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d76477a2-14d0-4d86-b850-a980bf3ca21a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8hwk8\" (UID: \"d76477a2-14d0-4d86-b850-a980bf3ca21a\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8hwk8" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.917695 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-trusted-ca-bundle\") pod \"console-f9d7485db-8vgph\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " pod="openshift-console/console-f9d7485db-8vgph" Feb 16 20:58:41 crc kubenswrapper[4811]: E0216 20:58:41.917714 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:42.417685168 +0000 UTC m=+140.346981116 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.918068 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-service-ca\") pod \"console-f9d7485db-8vgph\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " pod="openshift-console/console-f9d7485db-8vgph" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.918080 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f2292d96-838d-4d2c-a325-bb2d7f2d2eda-etcd-service-ca\") pod \"etcd-operator-b45778765-9hxzk\" (UID: \"f2292d96-838d-4d2c-a325-bb2d7f2d2eda\") 
" pod="openshift-etcd-operator/etcd-operator-b45778765-9hxzk" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.918327 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-oauth-serving-cert\") pod \"console-f9d7485db-8vgph\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " pod="openshift-console/console-f9d7485db-8vgph" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.919156 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h72ts\" (UniqueName: \"kubernetes.io/projected/eddedb9f-4d8f-467e-94a0-3e2b45746f42-kube-api-access-h72ts\") pod \"package-server-manager-789f6589d5-gdcrk\" (UID: \"eddedb9f-4d8f-467e-94a0-3e2b45746f42\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gdcrk" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.920175 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27b16b4d-6b71-4eba-955c-2f33c6c73a9d-config\") pod \"kube-apiserver-operator-766d6c64bb-wljzx\" (UID: \"27b16b4d-6b71-4eba-955c-2f33c6c73a9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wljzx" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.920764 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d76477a2-14d0-4d86-b850-a980bf3ca21a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8hwk8\" (UID: \"d76477a2-14d0-4d86-b850-a980bf3ca21a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8hwk8" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.920985 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/9e30dd8d-c885-4715-916c-2f87ff167589-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7wh82\" (UID: \"9e30dd8d-c885-4715-916c-2f87ff167589\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7wh82" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.921753 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/451b0c97-6ae1-4cb7-ac95-e4ecf08b0587-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ffr95\" (UID: \"451b0c97-6ae1-4cb7-ac95-e4ecf08b0587\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffr95" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.922008 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/451b0c97-6ae1-4cb7-ac95-e4ecf08b0587-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ffr95\" (UID: \"451b0c97-6ae1-4cb7-ac95-e4ecf08b0587\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffr95" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.922364 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-console-config\") pod \"console-f9d7485db-8vgph\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " pod="openshift-console/console-f9d7485db-8vgph" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.923074 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-console-serving-cert\") pod \"console-f9d7485db-8vgph\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " pod="openshift-console/console-f9d7485db-8vgph" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.923364 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-console-oauth-config\") pod \"console-f9d7485db-8vgph\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " pod="openshift-console/console-f9d7485db-8vgph" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.925529 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27b16b4d-6b71-4eba-955c-2f33c6c73a9d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wljzx\" (UID: \"27b16b4d-6b71-4eba-955c-2f33c6c73a9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wljzx" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.930803 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.951527 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.971472 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 20:58:41 crc kubenswrapper[4811]: I0216 20:58:41.991836 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.011967 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.020286 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.020525 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:42.520469281 +0000 UTC m=+140.449765229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.021086 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.021247 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7f1912bb-76f1-493c-b982-2a75e48cb649-metrics-tls\") pod \"ingress-operator-5b745b69d9-4km29\" (UID: \"7f1912bb-76f1-493c-b982-2a75e48cb649\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4km29" Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.021873 4811 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:42.521851946 +0000 UTC m=+140.451147894 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.031706 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.050434 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.071446 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.090539 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.110811 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.122645 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.122793 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:42.622764802 +0000 UTC m=+140.552060750 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.123269 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.123955 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:42.623944212 +0000 UTC m=+140.553240160 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.130983 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.152600 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.170939 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.191592 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.210658 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.225250 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.225502 4811 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:42.725463533 +0000 UTC m=+140.654759491 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.225627 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.226242 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:42.726216682 +0000 UTC m=+140.655512780 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.231037 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.250535 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.271596 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.291372 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.311883 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.327818 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.328058 4811 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:42.828021431 +0000 UTC m=+140.757317379 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.328944 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.329381 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:42.829370455 +0000 UTC m=+140.758666403 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.330792 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.346215 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fae08180-6d56-48f5-99c6-d98b52eb0ccf-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kk27l\" (UID: \"fae08180-6d56-48f5-99c6-d98b52eb0ccf\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kk27l" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.350917 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.372259 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.391577 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.411182 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.424403 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f2292d96-838d-4d2c-a325-bb2d7f2d2eda-serving-cert\") pod \"etcd-operator-b45778765-9hxzk\" (UID: \"f2292d96-838d-4d2c-a325-bb2d7f2d2eda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9hxzk" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.429420 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.429612 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:42.929566663 +0000 UTC m=+140.858862641 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.430113 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.430722 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:42.930702142 +0000 UTC m=+140.859998110 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.432481 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.439012 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f2292d96-838d-4d2c-a325-bb2d7f2d2eda-etcd-ca\") pod \"etcd-operator-b45778765-9hxzk\" (UID: \"f2292d96-838d-4d2c-a325-bb2d7f2d2eda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9hxzk" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.451285 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.464574 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f2292d96-838d-4d2c-a325-bb2d7f2d2eda-etcd-client\") pod \"etcd-operator-b45778765-9hxzk\" (UID: \"f2292d96-838d-4d2c-a325-bb2d7f2d2eda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9hxzk" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.472398 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.480245 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/f2292d96-838d-4d2c-a325-bb2d7f2d2eda-etcd-service-ca\") pod \"etcd-operator-b45778765-9hxzk\" (UID: \"f2292d96-838d-4d2c-a325-bb2d7f2d2eda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9hxzk" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.490984 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.498660 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2292d96-838d-4d2c-a325-bb2d7f2d2eda-config\") pod \"etcd-operator-b45778765-9hxzk\" (UID: \"f2292d96-838d-4d2c-a325-bb2d7f2d2eda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9hxzk" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.511539 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.531126 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.531600 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:43.031551746 +0000 UTC m=+140.960847724 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.531952 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.532083 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.532549 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:43.032520571 +0000 UTC m=+140.961816549 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.551667 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.571469 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.592275 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.612605 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.630860 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.633582 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.633810 4811 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:43.133775735 +0000 UTC m=+141.063071713 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.634938 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.635551 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:43.13553449 +0000 UTC m=+141.064830458 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.644547 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/eddedb9f-4d8f-467e-94a0-3e2b45746f42-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gdcrk\" (UID: \"eddedb9f-4d8f-467e-94a0-3e2b45746f42\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gdcrk" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.651580 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.662743 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9e30dd8d-c885-4715-916c-2f87ff167589-proxy-tls\") pod \"machine-config-controller-84d6567774-7wh82\" (UID: \"9e30dd8d-c885-4715-916c-2f87ff167589\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7wh82" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.671902 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.691500 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.709657 4811 request.go:700] 
Waited for 1.013740237s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-metrics-certs-default&limit=500&resourceVersion=0 Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.712838 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.722392 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d817b52-21fc-40d9-a36f-487e6719ebfe-metrics-certs\") pod \"router-default-5444994796-lbxk8\" (UID: \"2d817b52-21fc-40d9-a36f-487e6719ebfe\") " pod="openshift-ingress/router-default-5444994796-lbxk8" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.730883 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.736451 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.736653 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:43.23662301 +0000 UTC m=+141.165918948 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.737010 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.737344 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:43.237333888 +0000 UTC m=+141.166629826 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.743596 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2d817b52-21fc-40d9-a36f-487e6719ebfe-stats-auth\") pod \"router-default-5444994796-lbxk8\" (UID: \"2d817b52-21fc-40d9-a36f-487e6719ebfe\") " pod="openshift-ingress/router-default-5444994796-lbxk8" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.751749 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.771210 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.783390 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2d817b52-21fc-40d9-a36f-487e6719ebfe-default-certificate\") pod \"router-default-5444994796-lbxk8\" (UID: \"2d817b52-21fc-40d9-a36f-487e6719ebfe\") " pod="openshift-ingress/router-default-5444994796-lbxk8" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.791608 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.801575 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/2d817b52-21fc-40d9-a36f-487e6719ebfe-service-ca-bundle\") pod \"router-default-5444994796-lbxk8\" (UID: \"2d817b52-21fc-40d9-a36f-487e6719ebfe\") " pod="openshift-ingress/router-default-5444994796-lbxk8" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.811818 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.837766 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.837956 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:43.337919286 +0000 UTC m=+141.267215224 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.838634 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.839488 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:43.339477945 +0000 UTC m=+141.268773883 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.866217 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7vm5\" (UniqueName: \"kubernetes.io/projected/4ce7d7ec-2a9d-4404-917a-da07f09d990d-kube-api-access-z7vm5\") pod \"openshift-apiserver-operator-796bbdcf4f-rpx95\" (UID: \"4ce7d7ec-2a9d-4404-917a-da07f09d990d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rpx95" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.878558 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vdww\" (UniqueName: \"kubernetes.io/projected/35fa6f12-cf55-48d7-82ef-4987071adff7-kube-api-access-8vdww\") pod \"apiserver-7bbb656c7d-fv7gf\" (UID: \"35fa6f12-cf55-48d7-82ef-4987071adff7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.894139 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54v2p\" (UniqueName: \"kubernetes.io/projected/81a41d1f-0c1d-41cf-991b-f521c34bde80-kube-api-access-54v2p\") pod \"apiserver-76f77b778f-njf2g\" (UID: \"81a41d1f-0c1d-41cf-991b-f521c34bde80\") " pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.913719 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.917733 4811 configmap.go:193] 
Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.917778 4811 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.917859 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b848efbd-79a2-4b6b-a42f-36f109a33e01-control-plane-machine-set-operator-tls podName:b848efbd-79a2-4b6b-a42f-36f109a33e01 nodeName:}" failed. No retries permitted until 2026-02-16 20:58:43.417838562 +0000 UTC m=+141.347134500 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/b848efbd-79a2-4b6b-a42f-36f109a33e01-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-qnxsg" (UID: "b848efbd-79a2-4b6b-a42f-36f109a33e01") : failed to sync secret cache: timed out waiting for the condition Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.917887 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/45722898-287e-4a8e-8816-5928e178d2d7-marketplace-trusted-ca podName:45722898-287e-4a8e-8816-5928e178d2d7 nodeName:}" failed. No retries permitted until 2026-02-16 20:58:43.417875703 +0000 UTC m=+141.347171641 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/45722898-287e-4a8e-8816-5928e178d2d7-marketplace-trusted-ca") pod "marketplace-operator-79b997595-n8rd6" (UID: "45722898-287e-4a8e-8816-5928e178d2d7") : failed to sync configmap cache: timed out waiting for the condition Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.918941 4811 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.919057 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45722898-287e-4a8e-8816-5928e178d2d7-marketplace-operator-metrics podName:45722898-287e-4a8e-8816-5928e178d2d7 nodeName:}" failed. No retries permitted until 2026-02-16 20:58:43.419032822 +0000 UTC m=+141.348328770 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/45722898-287e-4a8e-8816-5928e178d2d7-marketplace-operator-metrics") pod "marketplace-operator-79b997595-n8rd6" (UID: "45722898-287e-4a8e-8816-5928e178d2d7") : failed to sync secret cache: timed out waiting for the condition Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.919179 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d76477a2-14d0-4d86-b850-a980bf3ca21a-images\") pod \"machine-config-operator-74547568cd-8hwk8\" (UID: \"d76477a2-14d0-4d86-b850-a980bf3ca21a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8hwk8" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.919187 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5zb6\" (UniqueName: \"kubernetes.io/projected/a956e785-7e90-41d8-97ea-d89664b3719a-kube-api-access-v5zb6\") pod \"controller-manager-879f6c89f-hxljc\" (UID: 
\"a956e785-7e90-41d8-97ea-d89664b3719a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.921082 4811 secret.go:188] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.921134 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d76477a2-14d0-4d86-b850-a980bf3ca21a-proxy-tls podName:d76477a2-14d0-4d86-b850-a980bf3ca21a nodeName:}" failed. No retries permitted until 2026-02-16 20:58:43.421124265 +0000 UTC m=+141.350420203 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/d76477a2-14d0-4d86-b850-a980bf3ca21a-proxy-tls") pod "machine-config-operator-74547568cd-8hwk8" (UID: "d76477a2-14d0-4d86-b850-a980bf3ca21a") : failed to sync secret cache: timed out waiting for the condition Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.921145 4811 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.921264 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a354d7fc-db3d-4d2b-bab5-973e5fb71d3e-cert podName:a354d7fc-db3d-4d2b-bab5-973e5fb71d3e nodeName:}" failed. No retries permitted until 2026-02-16 20:58:43.421240438 +0000 UTC m=+141.350536416 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a354d7fc-db3d-4d2b-bab5-973e5fb71d3e-cert") pod "ingress-canary-ttddd" (UID: "a354d7fc-db3d-4d2b-bab5-973e5fb71d3e") : failed to sync secret cache: timed out waiting for the condition Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.931221 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.939728 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.939885 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:43.439868458 +0000 UTC m=+141.369164406 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.941278 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:42 crc kubenswrapper[4811]: E0216 20:58:42.941673 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:43.441661753 +0000 UTC m=+141.370957701 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.951430 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.991912 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pldnp\" (UniqueName: \"kubernetes.io/projected/90270354-a779-4378-8bca-c2ff51ecac2e-kube-api-access-pldnp\") pod \"machine-api-operator-5694c8668f-gx777\" (UID: \"90270354-a779-4378-8bca-c2ff51ecac2e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gx777" Feb 16 20:58:42 crc kubenswrapper[4811]: I0216 20:58:42.993165 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.003927 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.016619 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.022374 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.031911 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.042614 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:43 crc kubenswrapper[4811]: E0216 20:58:43.042840 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:43.542801685 +0000 UTC m=+141.472097663 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.043346 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:43 crc kubenswrapper[4811]: E0216 20:58:43.043934 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:43.543916593 +0000 UTC m=+141.473212561 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.048127 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rpx95" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.071152 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.074775 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvm56\" (UniqueName: \"kubernetes.io/projected/e17f4635-2bd6-4ad1-b337-63c0e87ac247-kube-api-access-nvm56\") pod \"route-controller-manager-6576b87f9c-zqrkd\" (UID: \"e17f4635-2bd6-4ad1-b337-63c0e87ac247\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.091033 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.091874 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.102745 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.112848 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.133649 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.145641 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:43 crc kubenswrapper[4811]: E0216 20:58:43.145814 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:43.645782263 +0000 UTC m=+141.575078201 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.147356 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:43 crc kubenswrapper[4811]: E0216 20:58:43.148255 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:43.648228635 +0000 UTC m=+141.577524613 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.154072 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.172166 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.198451 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.213424 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.231607 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.249306 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:43 crc kubenswrapper[4811]: E0216 20:58:43.249596 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:43.749565982 +0000 UTC m=+141.678861920 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.250910 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.271379 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.287699 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-gx777" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.294982 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.311325 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.332246 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.351688 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.352259 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:43 crc kubenswrapper[4811]: E0216 20:58:43.352626 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:43.852608292 +0000 UTC m=+141.781904230 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.372336 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.390703 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.411718 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.414455 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf"] Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.433557 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.452520 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.453472 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:43 crc 
kubenswrapper[4811]: I0216 20:58:43.453764 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/45722898-287e-4a8e-8816-5928e178d2d7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-n8rd6\" (UID: \"45722898-287e-4a8e-8816-5928e178d2d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-n8rd6" Feb 16 20:58:43 crc kubenswrapper[4811]: E0216 20:58:43.453835 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:43.953802165 +0000 UTC m=+141.883098103 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.454780 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b848efbd-79a2-4b6b-a42f-36f109a33e01-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qnxsg\" (UID: \"b848efbd-79a2-4b6b-a42f-36f109a33e01\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qnxsg" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.454859 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/45722898-287e-4a8e-8816-5928e178d2d7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-n8rd6\" (UID: \"45722898-287e-4a8e-8816-5928e178d2d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-n8rd6" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.454910 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.454950 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a354d7fc-db3d-4d2b-bab5-973e5fb71d3e-cert\") pod \"ingress-canary-ttddd\" (UID: \"a354d7fc-db3d-4d2b-bab5-973e5fb71d3e\") " pod="openshift-ingress-canary/ingress-canary-ttddd" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.454971 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d76477a2-14d0-4d86-b850-a980bf3ca21a-proxy-tls\") pod \"machine-config-operator-74547568cd-8hwk8\" (UID: \"d76477a2-14d0-4d86-b850-a980bf3ca21a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8hwk8" Feb 16 20:58:43 crc kubenswrapper[4811]: E0216 20:58:43.455619 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:43.95559781 +0000 UTC m=+141.884893748 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.456899 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45722898-287e-4a8e-8816-5928e178d2d7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-n8rd6\" (UID: \"45722898-287e-4a8e-8816-5928e178d2d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-n8rd6" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.459468 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/45722898-287e-4a8e-8816-5928e178d2d7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-n8rd6\" (UID: \"45722898-287e-4a8e-8816-5928e178d2d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-n8rd6" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.459794 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d76477a2-14d0-4d86-b850-a980bf3ca21a-proxy-tls\") pod \"machine-config-operator-74547568cd-8hwk8\" (UID: \"d76477a2-14d0-4d86-b850-a980bf3ca21a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8hwk8" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.460731 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/b848efbd-79a2-4b6b-a42f-36f109a33e01-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qnxsg\" (UID: \"b848efbd-79a2-4b6b-a42f-36f109a33e01\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qnxsg" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.464958 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gx777"] Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.470792 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4811]: W0216 20:58:43.474329 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90270354_a779_4378_8bca_c2ff51ecac2e.slice/crio-9e91314dbe9799ba312c7b07c9510868ab652ef5a2527563bb0da94bd464378f WatchSource:0}: Error finding container 9e91314dbe9799ba312c7b07c9510868ab652ef5a2527563bb0da94bd464378f: Status 404 returned error can't find the container with id 9e91314dbe9799ba312c7b07c9510868ab652ef5a2527563bb0da94bd464378f Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.497807 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.501330 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hxljc"] Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.508729 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rpx95"] Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.512815 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 16 20:58:43 crc kubenswrapper[4811]: 
W0216 20:58:43.514919 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda956e785_7e90_41d8_97ea_d89664b3719a.slice/crio-82026325063e3af3d37265c255b4aaa85dd818908c6e7f08d24129658da11c87 WatchSource:0}: Error finding container 82026325063e3af3d37265c255b4aaa85dd818908c6e7f08d24129658da11c87: Status 404 returned error can't find the container with id 82026325063e3af3d37265c255b4aaa85dd818908c6e7f08d24129658da11c87 Feb 16 20:58:43 crc kubenswrapper[4811]: W0216 20:58:43.518645 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ce7d7ec_2a9d_4404_917a_da07f09d990d.slice/crio-d05570b5f9d5d132d5ed7c13044b2b4a57a77ecbb72be20f6630f6eb160ff517 WatchSource:0}: Error finding container d05570b5f9d5d132d5ed7c13044b2b4a57a77ecbb72be20f6630f6eb160ff517: Status 404 returned error can't find the container with id d05570b5f9d5d132d5ed7c13044b2b4a57a77ecbb72be20f6630f6eb160ff517 Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.533635 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.540383 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a354d7fc-db3d-4d2b-bab5-973e5fb71d3e-cert\") pod \"ingress-canary-ttddd\" (UID: \"a354d7fc-db3d-4d2b-bab5-973e5fb71d3e\") " pod="openshift-ingress-canary/ingress-canary-ttddd" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.551598 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 20:58:43 crc kubenswrapper[4811]: E0216 20:58:43.557393 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:44.057360928 +0000 UTC m=+141.986656866 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.557545 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.557978 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:43 crc kubenswrapper[4811]: E0216 20:58:43.558838 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:44.058828775 +0000 UTC m=+141.988124713 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.560853 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-njf2g"] Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.573032 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.574700 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd"] Feb 16 20:58:43 crc kubenswrapper[4811]: W0216 20:58:43.585516 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81a41d1f_0c1d_41cf_991b_f521c34bde80.slice/crio-971740d6d5c19d4b95e0c3c8a7f335be229806c8ec6a57ff7a65ce54e4bd20a7 WatchSource:0}: Error finding container 971740d6d5c19d4b95e0c3c8a7f335be229806c8ec6a57ff7a65ce54e4bd20a7: Status 404 returned error can't find the container with id 971740d6d5c19d4b95e0c3c8a7f335be229806c8ec6a57ff7a65ce54e4bd20a7 Feb 16 20:58:43 crc kubenswrapper[4811]: W0216 20:58:43.587918 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode17f4635_2bd6_4ad1_b337_63c0e87ac247.slice/crio-384489e297e4886d9b56ecc145374ec2aa2698f3309074a36628fe69b2e6ac08 WatchSource:0}: Error finding container 384489e297e4886d9b56ecc145374ec2aa2698f3309074a36628fe69b2e6ac08: Status 404 returned error can't find the container 
with id 384489e297e4886d9b56ecc145374ec2aa2698f3309074a36628fe69b2e6ac08 Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.590681 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.607709 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gx777" event={"ID":"90270354-a779-4378-8bca-c2ff51ecac2e","Type":"ContainerStarted","Data":"9e91314dbe9799ba312c7b07c9510868ab652ef5a2527563bb0da94bd464378f"} Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.608839 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-njf2g" event={"ID":"81a41d1f-0c1d-41cf-991b-f521c34bde80","Type":"ContainerStarted","Data":"971740d6d5c19d4b95e0c3c8a7f335be229806c8ec6a57ff7a65ce54e4bd20a7"} Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.611026 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rpx95" event={"ID":"4ce7d7ec-2a9d-4404-917a-da07f09d990d","Type":"ContainerStarted","Data":"d05570b5f9d5d132d5ed7c13044b2b4a57a77ecbb72be20f6630f6eb160ff517"} Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.618032 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" event={"ID":"35fa6f12-cf55-48d7-82ef-4987071adff7","Type":"ContainerStarted","Data":"313c712f58e6990b979c378ca82318c172db99d668f579e65a3853329d9ab25b"} Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.621216 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd" event={"ID":"e17f4635-2bd6-4ad1-b337-63c0e87ac247","Type":"ContainerStarted","Data":"384489e297e4886d9b56ecc145374ec2aa2698f3309074a36628fe69b2e6ac08"} Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.622928 
4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" event={"ID":"a956e785-7e90-41d8-97ea-d89664b3719a","Type":"ContainerStarted","Data":"82026325063e3af3d37265c255b4aaa85dd818908c6e7f08d24129658da11c87"} Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.631766 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.650556 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.660550 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:43 crc kubenswrapper[4811]: E0216 20:58:43.660760 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:44.160734336 +0000 UTC m=+142.090030274 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.661079 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:43 crc kubenswrapper[4811]: E0216 20:58:43.661427 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:44.161416853 +0000 UTC m=+142.090712791 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.671810 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.710614 4811 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.729399 4811 request.go:700] Waited for 1.935270958s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.732362 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.755571 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.775109 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:43 crc kubenswrapper[4811]: E0216 
20:58:43.775841 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:44.275814499 +0000 UTC m=+142.205110437 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.812115 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2vpb\" (UniqueName: \"kubernetes.io/projected/48e7d3f9-4e00-476b-90bc-9d238ef4f5ca-kube-api-access-p2vpb\") pod \"console-operator-58897d9998-cch5x\" (UID: \"48e7d3f9-4e00-476b-90bc-9d238ef4f5ca\") " pod="openshift-console-operator/console-operator-58897d9998-cch5x" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.817088 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgk99\" (UniqueName: \"kubernetes.io/projected/a101d06e-8e7f-4fcf-9788-a54237068ad7-kube-api-access-hgk99\") pod \"authentication-operator-69f744f599-zwtjs\" (UID: \"a101d06e-8e7f-4fcf-9788-a54237068ad7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zwtjs" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.827998 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjwdv\" (UniqueName: \"kubernetes.io/projected/56ef6d7e-b0bf-4bfa-8426-68040e136fe1-kube-api-access-tjwdv\") pod \"downloads-7954f5f757-mn795\" (UID: 
\"56ef6d7e-b0bf-4bfa-8426-68040e136fe1\") " pod="openshift-console/downloads-7954f5f757-mn795" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.859432 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-cch5x" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.866616 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-mn795" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.876277 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5qh8\" (UniqueName: \"kubernetes.io/projected/7ff60cdb-3618-4902-a679-e5bda29c5c60-kube-api-access-h5qh8\") pod \"oauth-openshift-558db77b4-4f8kg\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.876888 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:43 crc kubenswrapper[4811]: E0216 20:58:43.877405 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:44.377393942 +0000 UTC m=+142.306689880 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.888386 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt45r\" (UniqueName: \"kubernetes.io/projected/20c76084-401b-41ca-ad08-2752d2d7132b-kube-api-access-pt45r\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.901136 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzdsd\" (UniqueName: \"kubernetes.io/projected/218883f2-cdcd-4b76-8f3c-dea0af40092c-kube-api-access-mzdsd\") pod \"cluster-samples-operator-665b6dd947-qjhps\" (UID: \"218883f2-cdcd-4b76-8f3c-dea0af40092c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qjhps" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.906121 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqqjr\" (UniqueName: \"kubernetes.io/projected/b3a24eed-0751-45b0-945e-6351d15be4f6-kube-api-access-dqqjr\") pod \"machine-approver-56656f9798-g82fx\" (UID: \"b3a24eed-0751-45b0-945e-6351d15be4f6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g82fx" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.925637 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4cb2\" (UniqueName: 
\"kubernetes.io/projected/79f24eee-94ca-47b2-bcc5-389f01bf5849-kube-api-access-r4cb2\") pod \"openshift-controller-manager-operator-756b6f6bc6-pgtxl\" (UID: \"79f24eee-94ca-47b2-bcc5-389f01bf5849\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgtxl" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.950413 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20c76084-401b-41ca-ad08-2752d2d7132b-bound-sa-token\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.972569 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f8d67f2-74fc-4244-a62e-97fed3b28c79-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-fh4pc\" (UID: \"7f8d67f2-74fc-4244-a62e-97fed3b28c79\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fh4pc" Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.978545 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:43 crc kubenswrapper[4811]: E0216 20:58:43.978690 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:44.478654446 +0000 UTC m=+142.407950384 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.979175 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:43 crc kubenswrapper[4811]: E0216 20:58:43.979810 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:44.479792395 +0000 UTC m=+142.409088333 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:43 crc kubenswrapper[4811]: I0216 20:58:43.990911 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27b16b4d-6b71-4eba-955c-2f33c6c73a9d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wljzx\" (UID: \"27b16b4d-6b71-4eba-955c-2f33c6c73a9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wljzx" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.008892 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmxt2\" (UniqueName: \"kubernetes.io/projected/9e30dd8d-c885-4715-916c-2f87ff167589-kube-api-access-lmxt2\") pod \"machine-config-controller-84d6567774-7wh82\" (UID: \"9e30dd8d-c885-4715-916c-2f87ff167589\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7wh82" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.025111 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qjhps" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.033616 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52n6w\" (UniqueName: \"kubernetes.io/projected/a354d7fc-db3d-4d2b-bab5-973e5fb71d3e-kube-api-access-52n6w\") pod \"ingress-canary-ttddd\" (UID: \"a354d7fc-db3d-4d2b-bab5-973e5fb71d3e\") " pod="openshift-ingress-canary/ingress-canary-ttddd" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.034928 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-zwtjs" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.051744 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mwg4\" (UniqueName: \"kubernetes.io/projected/f2292d96-838d-4d2c-a325-bb2d7f2d2eda-kube-api-access-4mwg4\") pod \"etcd-operator-b45778765-9hxzk\" (UID: \"f2292d96-838d-4d2c-a325-bb2d7f2d2eda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9hxzk" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.053613 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.067828 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g82fx" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.079218 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85zzt\" (UniqueName: \"kubernetes.io/projected/d76477a2-14d0-4d86-b850-a980bf3ca21a-kube-api-access-85zzt\") pod \"machine-config-operator-74547568cd-8hwk8\" (UID: \"d76477a2-14d0-4d86-b850-a980bf3ca21a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8hwk8" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.079765 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:44 crc kubenswrapper[4811]: E0216 20:58:44.080253 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:44.580237289 +0000 UTC m=+142.509533227 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.085910 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mn795"] Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.086088 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-ttddd" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.088260 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jvwm\" (UniqueName: \"kubernetes.io/projected/b848efbd-79a2-4b6b-a42f-36f109a33e01-kube-api-access-6jvwm\") pod \"control-plane-machine-set-operator-78cbb6b69f-qnxsg\" (UID: \"b848efbd-79a2-4b6b-a42f-36f109a33e01\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qnxsg" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.091162 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgtxl" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.110017 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqmxf\" (UniqueName: \"kubernetes.io/projected/45722898-287e-4a8e-8816-5928e178d2d7-kube-api-access-kqmxf\") pod \"marketplace-operator-79b997595-n8rd6\" (UID: \"45722898-287e-4a8e-8816-5928e178d2d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-n8rd6" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.126118 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbrvq\" (UniqueName: \"kubernetes.io/projected/7f1912bb-76f1-493c-b982-2a75e48cb649-kube-api-access-lbrvq\") pod \"ingress-operator-5b745b69d9-4km29\" (UID: \"7f1912bb-76f1-493c-b982-2a75e48cb649\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4km29" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.133809 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-cch5x"] Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.144569 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fh4pc" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.157696 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6hqw\" (UniqueName: \"kubernetes.io/projected/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-kube-api-access-v6hqw\") pod \"console-f9d7485db-8vgph\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " pod="openshift-console/console-f9d7485db-8vgph" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.180989 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:44 crc kubenswrapper[4811]: E0216 20:58:44.181372 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:44.68135873 +0000 UTC m=+142.610654668 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.183969 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhj5t\" (UniqueName: \"kubernetes.io/projected/2d817b52-21fc-40d9-a36f-487e6719ebfe-kube-api-access-hhj5t\") pod \"router-default-5444994796-lbxk8\" (UID: \"2d817b52-21fc-40d9-a36f-487e6719ebfe\") " pod="openshift-ingress/router-default-5444994796-lbxk8" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.184368 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8vgph" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.193125 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wljzx" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.194946 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2ncs\" (UniqueName: \"kubernetes.io/projected/451b0c97-6ae1-4cb7-ac95-e4ecf08b0587-kube-api-access-j2ncs\") pod \"cluster-image-registry-operator-dc59b4c8b-ffr95\" (UID: \"451b0c97-6ae1-4cb7-ac95-e4ecf08b0587\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffr95" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.230876 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h72ts\" (UniqueName: \"kubernetes.io/projected/eddedb9f-4d8f-467e-94a0-3e2b45746f42-kube-api-access-h72ts\") pod \"package-server-manager-789f6589d5-gdcrk\" (UID: \"eddedb9f-4d8f-467e-94a0-3e2b45746f42\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gdcrk" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.246720 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7f1912bb-76f1-493c-b982-2a75e48cb649-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4km29\" (UID: \"7f1912bb-76f1-493c-b982-2a75e48cb649\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4km29" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.249169 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/451b0c97-6ae1-4cb7-ac95-e4ecf08b0587-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ffr95\" (UID: \"451b0c97-6ae1-4cb7-ac95-e4ecf08b0587\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffr95" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.281991 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-c4xx2\" (UniqueName: \"kubernetes.io/projected/fae08180-6d56-48f5-99c6-d98b52eb0ccf-kube-api-access-c4xx2\") pod \"multus-admission-controller-857f4d67dd-kk27l\" (UID: \"fae08180-6d56-48f5-99c6-d98b52eb0ccf\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kk27l" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.282675 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:44 crc kubenswrapper[4811]: E0216 20:58:44.283109 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:44.783087017 +0000 UTC m=+142.712382955 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.288456 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-9hxzk" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.302382 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gdcrk" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.307317 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7wh82" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.316952 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-lbxk8" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.324515 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8hwk8" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.328223 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zg85\" (UniqueName: \"kubernetes.io/projected/c699beb7-358c-424b-ab7e-cd1396bd8803-kube-api-access-6zg85\") pod \"migrator-59844c95c7-nl2ks\" (UID: \"c699beb7-358c-424b-ab7e-cd1396bd8803\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nl2ks" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.347131 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qnxsg" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.347304 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nl2ks" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.349549 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qjhps"] Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.354679 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-n8rd6" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.386264 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqnl8\" (UniqueName: \"kubernetes.io/projected/9f81a0a6-152c-48ca-8eec-eb9e330d3902-kube-api-access-hqnl8\") pod \"service-ca-operator-777779d784-tm698\" (UID: \"9f81a0a6-152c-48ca-8eec-eb9e330d3902\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tm698" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.386332 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5952073-af7a-4268-b4e2-ad8e98b0e02a-metrics-tls\") pod \"dns-default-nlp5w\" (UID: \"d5952073-af7a-4268-b4e2-ad8e98b0e02a\") " pod="openshift-dns/dns-default-nlp5w" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.386353 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2b8eb903-1c76-4c64-b9ec-f33f22e756cf-profile-collector-cert\") pod \"catalog-operator-68c6474976-9ckm2\" (UID: \"2b8eb903-1c76-4c64-b9ec-f33f22e756cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9ckm2" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.386386 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86a38724-0aff-4a27-bebf-7eab7ffa24bc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6q4n5\" (UID: \"86a38724-0aff-4a27-bebf-7eab7ffa24bc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6q4n5" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.386408 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/82635068-e556-4de1-be36-160c60aed1d4-webhook-cert\") pod \"packageserver-d55dfcdfc-bpcmh\" (UID: \"82635068-e556-4de1-be36-160c60aed1d4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bpcmh" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.386436 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll5c9\" (UniqueName: \"kubernetes.io/projected/82635068-e556-4de1-be36-160c60aed1d4-kube-api-access-ll5c9\") pod \"packageserver-d55dfcdfc-bpcmh\" (UID: \"82635068-e556-4de1-be36-160c60aed1d4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bpcmh" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.386454 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12b6e0a6-4e13-4393-a8c3-6820aeda2913-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hbbzr\" (UID: \"12b6e0a6-4e13-4393-a8c3-6820aeda2913\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbbzr" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.386476 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ae95b88-69b4-470b-9551-6f6412d991ac-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vx5rb\" (UID: \"8ae95b88-69b4-470b-9551-6f6412d991ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vx5rb" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.386642 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.386704 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12b6e0a6-4e13-4393-a8c3-6820aeda2913-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hbbzr\" (UID: \"12b6e0a6-4e13-4393-a8c3-6820aeda2913\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbbzr" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.386728 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/82635068-e556-4de1-be36-160c60aed1d4-apiservice-cert\") pod \"packageserver-d55dfcdfc-bpcmh\" (UID: \"82635068-e556-4de1-be36-160c60aed1d4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bpcmh" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.386749 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8d5d20c-9da0-4bf1-9f57-d3b96c8736e6-metrics-tls\") pod \"dns-operator-744455d44c-2cqpn\" (UID: \"d8d5d20c-9da0-4bf1-9f57-d3b96c8736e6\") " pod="openshift-dns-operator/dns-operator-744455d44c-2cqpn" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.386770 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x6d8\" (UniqueName: \"kubernetes.io/projected/9195c217-c5bc-4625-9b9c-2aa209485e3c-kube-api-access-9x6d8\") pod \"collect-profiles-29521245-9xzcs\" (UID: \"9195c217-c5bc-4625-9b9c-2aa209485e3c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-9xzcs" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.386806 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b9fba317-198f-48f6-b678-9a0df33df707-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ktp4v\" (UID: \"b9fba317-198f-48f6-b678-9a0df33df707\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ktp4v" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.386826 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9195c217-c5bc-4625-9b9c-2aa209485e3c-config-volume\") pod \"collect-profiles-29521245-9xzcs\" (UID: \"9195c217-c5bc-4625-9b9c-2aa209485e3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-9xzcs" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.386844 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdp22\" (UniqueName: \"kubernetes.io/projected/86a38724-0aff-4a27-bebf-7eab7ffa24bc-kube-api-access-pdp22\") pod \"kube-storage-version-migrator-operator-b67b599dd-6q4n5\" (UID: \"86a38724-0aff-4a27-bebf-7eab7ffa24bc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6q4n5" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.386867 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr728\" (UniqueName: \"kubernetes.io/projected/d5952073-af7a-4268-b4e2-ad8e98b0e02a-kube-api-access-lr728\") pod \"dns-default-nlp5w\" (UID: \"d5952073-af7a-4268-b4e2-ad8e98b0e02a\") " pod="openshift-dns/dns-default-nlp5w" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.386883 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ae95b88-69b4-470b-9551-6f6412d991ac-srv-cert\") pod \"olm-operator-6b444d44fb-vx5rb\" (UID: \"8ae95b88-69b4-470b-9551-6f6412d991ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vx5rb" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.386901 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f81a0a6-152c-48ca-8eec-eb9e330d3902-config\") pod \"service-ca-operator-777779d784-tm698\" (UID: \"9f81a0a6-152c-48ca-8eec-eb9e330d3902\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tm698" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.386917 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12b6e0a6-4e13-4393-a8c3-6820aeda2913-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hbbzr\" (UID: \"12b6e0a6-4e13-4393-a8c3-6820aeda2913\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbbzr" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.386933 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9fba317-198f-48f6-b678-9a0df33df707-serving-cert\") pod \"openshift-config-operator-7777fb866f-ktp4v\" (UID: \"b9fba317-198f-48f6-b678-9a0df33df707\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ktp4v" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.386956 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9195c217-c5bc-4625-9b9c-2aa209485e3c-secret-volume\") pod \"collect-profiles-29521245-9xzcs\" (UID: 
\"9195c217-c5bc-4625-9b9c-2aa209485e3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-9xzcs" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.386994 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/49885968-d644-4257-9e7c-7ed6bc875f9e-signing-cabundle\") pod \"service-ca-9c57cc56f-z67dz\" (UID: \"49885968-d644-4257-9e7c-7ed6bc875f9e\") " pod="openshift-service-ca/service-ca-9c57cc56f-z67dz" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.387027 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8brb\" (UniqueName: \"kubernetes.io/projected/8ae95b88-69b4-470b-9551-6f6412d991ac-kube-api-access-x8brb\") pod \"olm-operator-6b444d44fb-vx5rb\" (UID: \"8ae95b88-69b4-470b-9551-6f6412d991ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vx5rb" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.387043 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g47q\" (UniqueName: \"kubernetes.io/projected/2b8eb903-1c76-4c64-b9ec-f33f22e756cf-kube-api-access-8g47q\") pod \"catalog-operator-68c6474976-9ckm2\" (UID: \"2b8eb903-1c76-4c64-b9ec-f33f22e756cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9ckm2" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.387061 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/49885968-d644-4257-9e7c-7ed6bc875f9e-signing-key\") pod \"service-ca-9c57cc56f-z67dz\" (UID: \"49885968-d644-4257-9e7c-7ed6bc875f9e\") " pod="openshift-service-ca/service-ca-9c57cc56f-z67dz" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.387090 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86a38724-0aff-4a27-bebf-7eab7ffa24bc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6q4n5\" (UID: \"86a38724-0aff-4a27-bebf-7eab7ffa24bc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6q4n5" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.387105 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2b8eb903-1c76-4c64-b9ec-f33f22e756cf-srv-cert\") pod \"catalog-operator-68c6474976-9ckm2\" (UID: \"2b8eb903-1c76-4c64-b9ec-f33f22e756cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9ckm2" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.387129 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5952073-af7a-4268-b4e2-ad8e98b0e02a-config-volume\") pod \"dns-default-nlp5w\" (UID: \"d5952073-af7a-4268-b4e2-ad8e98b0e02a\") " pod="openshift-dns/dns-default-nlp5w" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.387145 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86dqf\" (UniqueName: \"kubernetes.io/projected/d8d5d20c-9da0-4bf1-9f57-d3b96c8736e6-kube-api-access-86dqf\") pod \"dns-operator-744455d44c-2cqpn\" (UID: \"d8d5d20c-9da0-4bf1-9f57-d3b96c8736e6\") " pod="openshift-dns-operator/dns-operator-744455d44c-2cqpn" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.387159 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/82635068-e556-4de1-be36-160c60aed1d4-tmpfs\") pod \"packageserver-d55dfcdfc-bpcmh\" (UID: \"82635068-e556-4de1-be36-160c60aed1d4\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bpcmh" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.387174 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f81a0a6-152c-48ca-8eec-eb9e330d3902-serving-cert\") pod \"service-ca-operator-777779d784-tm698\" (UID: \"9f81a0a6-152c-48ca-8eec-eb9e330d3902\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tm698" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.387208 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tf97\" (UniqueName: \"kubernetes.io/projected/b9fba317-198f-48f6-b678-9a0df33df707-kube-api-access-8tf97\") pod \"openshift-config-operator-7777fb866f-ktp4v\" (UID: \"b9fba317-198f-48f6-b678-9a0df33df707\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ktp4v" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.387236 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hbnr\" (UniqueName: \"kubernetes.io/projected/49885968-d644-4257-9e7c-7ed6bc875f9e-kube-api-access-2hbnr\") pod \"service-ca-9c57cc56f-z67dz\" (UID: \"49885968-d644-4257-9e7c-7ed6bc875f9e\") " pod="openshift-service-ca/service-ca-9c57cc56f-z67dz" Feb 16 20:58:44 crc kubenswrapper[4811]: E0216 20:58:44.388443 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:44.888418975 +0000 UTC m=+142.817714913 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.479669 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffr95" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.490076 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.490369 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86a38724-0aff-4a27-bebf-7eab7ffa24bc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6q4n5\" (UID: \"86a38724-0aff-4a27-bebf-7eab7ffa24bc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6q4n5" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.490398 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2b8eb903-1c76-4c64-b9ec-f33f22e756cf-srv-cert\") pod \"catalog-operator-68c6474976-9ckm2\" (UID: \"2b8eb903-1c76-4c64-b9ec-f33f22e756cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9ckm2" Feb 16 20:58:44 crc kubenswrapper[4811]: 
I0216 20:58:44.490427 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5952073-af7a-4268-b4e2-ad8e98b0e02a-config-volume\") pod \"dns-default-nlp5w\" (UID: \"d5952073-af7a-4268-b4e2-ad8e98b0e02a\") " pod="openshift-dns/dns-default-nlp5w" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.490443 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/82635068-e556-4de1-be36-160c60aed1d4-tmpfs\") pod \"packageserver-d55dfcdfc-bpcmh\" (UID: \"82635068-e556-4de1-be36-160c60aed1d4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bpcmh" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.490461 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86dqf\" (UniqueName: \"kubernetes.io/projected/d8d5d20c-9da0-4bf1-9f57-d3b96c8736e6-kube-api-access-86dqf\") pod \"dns-operator-744455d44c-2cqpn\" (UID: \"d8d5d20c-9da0-4bf1-9f57-d3b96c8736e6\") " pod="openshift-dns-operator/dns-operator-744455d44c-2cqpn" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.490482 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1-csi-data-dir\") pod \"csi-hostpathplugin-mvkhm\" (UID: \"eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1\") " pod="hostpath-provisioner/csi-hostpathplugin-mvkhm" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.490513 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f81a0a6-152c-48ca-8eec-eb9e330d3902-serving-cert\") pod \"service-ca-operator-777779d784-tm698\" (UID: \"9f81a0a6-152c-48ca-8eec-eb9e330d3902\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tm698" Feb 16 20:58:44 crc 
kubenswrapper[4811]: I0216 20:58:44.490532 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tf97\" (UniqueName: \"kubernetes.io/projected/b9fba317-198f-48f6-b678-9a0df33df707-kube-api-access-8tf97\") pod \"openshift-config-operator-7777fb866f-ktp4v\" (UID: \"b9fba317-198f-48f6-b678-9a0df33df707\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ktp4v" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.490593 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hbnr\" (UniqueName: \"kubernetes.io/projected/49885968-d644-4257-9e7c-7ed6bc875f9e-kube-api-access-2hbnr\") pod \"service-ca-9c57cc56f-z67dz\" (UID: \"49885968-d644-4257-9e7c-7ed6bc875f9e\") " pod="openshift-service-ca/service-ca-9c57cc56f-z67dz" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.490649 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqnl8\" (UniqueName: \"kubernetes.io/projected/9f81a0a6-152c-48ca-8eec-eb9e330d3902-kube-api-access-hqnl8\") pod \"service-ca-operator-777779d784-tm698\" (UID: \"9f81a0a6-152c-48ca-8eec-eb9e330d3902\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tm698" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.490750 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/41249bd4-022b-44a5-aea7-130e9ffa2117-certs\") pod \"machine-config-server-hbhzt\" (UID: \"41249bd4-022b-44a5-aea7-130e9ffa2117\") " pod="openshift-machine-config-operator/machine-config-server-hbhzt" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.490776 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86a38724-0aff-4a27-bebf-7eab7ffa24bc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6q4n5\" (UID: 
\"86a38724-0aff-4a27-bebf-7eab7ffa24bc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6q4n5" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.490791 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5952073-af7a-4268-b4e2-ad8e98b0e02a-metrics-tls\") pod \"dns-default-nlp5w\" (UID: \"d5952073-af7a-4268-b4e2-ad8e98b0e02a\") " pod="openshift-dns/dns-default-nlp5w" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.490807 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2b8eb903-1c76-4c64-b9ec-f33f22e756cf-profile-collector-cert\") pod \"catalog-operator-68c6474976-9ckm2\" (UID: \"2b8eb903-1c76-4c64-b9ec-f33f22e756cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9ckm2" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.490833 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/82635068-e556-4de1-be36-160c60aed1d4-webhook-cert\") pod \"packageserver-d55dfcdfc-bpcmh\" (UID: \"82635068-e556-4de1-be36-160c60aed1d4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bpcmh" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.490877 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll5c9\" (UniqueName: \"kubernetes.io/projected/82635068-e556-4de1-be36-160c60aed1d4-kube-api-access-ll5c9\") pod \"packageserver-d55dfcdfc-bpcmh\" (UID: \"82635068-e556-4de1-be36-160c60aed1d4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bpcmh" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.490894 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/41249bd4-022b-44a5-aea7-130e9ffa2117-node-bootstrap-token\") pod \"machine-config-server-hbhzt\" (UID: \"41249bd4-022b-44a5-aea7-130e9ffa2117\") " pod="openshift-machine-config-operator/machine-config-server-hbhzt" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.490915 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12b6e0a6-4e13-4393-a8c3-6820aeda2913-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hbbzr\" (UID: \"12b6e0a6-4e13-4393-a8c3-6820aeda2913\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbbzr" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.490945 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ae95b88-69b4-470b-9551-6f6412d991ac-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vx5rb\" (UID: \"8ae95b88-69b4-470b-9551-6f6412d991ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vx5rb" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.490982 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1-plugins-dir\") pod \"csi-hostpathplugin-mvkhm\" (UID: \"eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1\") " pod="hostpath-provisioner/csi-hostpathplugin-mvkhm" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.490997 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12b6e0a6-4e13-4393-a8c3-6820aeda2913-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hbbzr\" (UID: \"12b6e0a6-4e13-4393-a8c3-6820aeda2913\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbbzr" Feb 16 20:58:44 
crc kubenswrapper[4811]: I0216 20:58:44.491021 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/82635068-e556-4de1-be36-160c60aed1d4-apiservice-cert\") pod \"packageserver-d55dfcdfc-bpcmh\" (UID: \"82635068-e556-4de1-be36-160c60aed1d4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bpcmh" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.491036 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1-registration-dir\") pod \"csi-hostpathplugin-mvkhm\" (UID: \"eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1\") " pod="hostpath-provisioner/csi-hostpathplugin-mvkhm" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.491088 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8d5d20c-9da0-4bf1-9f57-d3b96c8736e6-metrics-tls\") pod \"dns-operator-744455d44c-2cqpn\" (UID: \"d8d5d20c-9da0-4bf1-9f57-d3b96c8736e6\") " pod="openshift-dns-operator/dns-operator-744455d44c-2cqpn" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.491134 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhb74\" (UniqueName: \"kubernetes.io/projected/eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1-kube-api-access-fhb74\") pod \"csi-hostpathplugin-mvkhm\" (UID: \"eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1\") " pod="hostpath-provisioner/csi-hostpathplugin-mvkhm" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.491176 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x6d8\" (UniqueName: \"kubernetes.io/projected/9195c217-c5bc-4625-9b9c-2aa209485e3c-kube-api-access-9x6d8\") pod \"collect-profiles-29521245-9xzcs\" (UID: \"9195c217-c5bc-4625-9b9c-2aa209485e3c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-9xzcs" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.492946 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7kgr\" (UniqueName: \"kubernetes.io/projected/41249bd4-022b-44a5-aea7-130e9ffa2117-kube-api-access-l7kgr\") pod \"machine-config-server-hbhzt\" (UID: \"41249bd4-022b-44a5-aea7-130e9ffa2117\") " pod="openshift-machine-config-operator/machine-config-server-hbhzt" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.493007 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b9fba317-198f-48f6-b678-9a0df33df707-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ktp4v\" (UID: \"b9fba317-198f-48f6-b678-9a0df33df707\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ktp4v" Feb 16 20:58:44 crc kubenswrapper[4811]: E0216 20:58:44.493380 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:44.993361242 +0000 UTC m=+142.922657180 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.493413 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1-mountpoint-dir\") pod \"csi-hostpathplugin-mvkhm\" (UID: \"eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1\") " pod="hostpath-provisioner/csi-hostpathplugin-mvkhm" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.493432 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9195c217-c5bc-4625-9b9c-2aa209485e3c-config-volume\") pod \"collect-profiles-29521245-9xzcs\" (UID: \"9195c217-c5bc-4625-9b9c-2aa209485e3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-9xzcs" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.493464 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdp22\" (UniqueName: \"kubernetes.io/projected/86a38724-0aff-4a27-bebf-7eab7ffa24bc-kube-api-access-pdp22\") pod \"kube-storage-version-migrator-operator-b67b599dd-6q4n5\" (UID: \"86a38724-0aff-4a27-bebf-7eab7ffa24bc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6q4n5" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.493510 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr728\" (UniqueName: 
\"kubernetes.io/projected/d5952073-af7a-4268-b4e2-ad8e98b0e02a-kube-api-access-lr728\") pod \"dns-default-nlp5w\" (UID: \"d5952073-af7a-4268-b4e2-ad8e98b0e02a\") " pod="openshift-dns/dns-default-nlp5w" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.493539 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ae95b88-69b4-470b-9551-6f6412d991ac-srv-cert\") pod \"olm-operator-6b444d44fb-vx5rb\" (UID: \"8ae95b88-69b4-470b-9551-6f6412d991ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vx5rb" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.493560 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f81a0a6-152c-48ca-8eec-eb9e330d3902-config\") pod \"service-ca-operator-777779d784-tm698\" (UID: \"9f81a0a6-152c-48ca-8eec-eb9e330d3902\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tm698" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.493588 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12b6e0a6-4e13-4393-a8c3-6820aeda2913-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hbbzr\" (UID: \"12b6e0a6-4e13-4393-a8c3-6820aeda2913\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbbzr" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.495253 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/82635068-e556-4de1-be36-160c60aed1d4-tmpfs\") pod \"packageserver-d55dfcdfc-bpcmh\" (UID: \"82635068-e556-4de1-be36-160c60aed1d4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bpcmh" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.496050 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/86a38724-0aff-4a27-bebf-7eab7ffa24bc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6q4n5\" (UID: \"86a38724-0aff-4a27-bebf-7eab7ffa24bc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6q4n5" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.497020 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b9fba317-198f-48f6-b678-9a0df33df707-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ktp4v\" (UID: \"b9fba317-198f-48f6-b678-9a0df33df707\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ktp4v" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.497502 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86a38724-0aff-4a27-bebf-7eab7ffa24bc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6q4n5\" (UID: \"86a38724-0aff-4a27-bebf-7eab7ffa24bc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6q4n5" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.499019 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f81a0a6-152c-48ca-8eec-eb9e330d3902-serving-cert\") pod \"service-ca-operator-777779d784-tm698\" (UID: \"9f81a0a6-152c-48ca-8eec-eb9e330d3902\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tm698" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.499732 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5952073-af7a-4268-b4e2-ad8e98b0e02a-metrics-tls\") pod \"dns-default-nlp5w\" (UID: \"d5952073-af7a-4268-b4e2-ad8e98b0e02a\") " pod="openshift-dns/dns-default-nlp5w" Feb 16 20:58:44 crc 
kubenswrapper[4811]: I0216 20:58:44.500584 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5952073-af7a-4268-b4e2-ad8e98b0e02a-config-volume\") pod \"dns-default-nlp5w\" (UID: \"d5952073-af7a-4268-b4e2-ad8e98b0e02a\") " pod="openshift-dns/dns-default-nlp5w" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.502455 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2b8eb903-1c76-4c64-b9ec-f33f22e756cf-profile-collector-cert\") pod \"catalog-operator-68c6474976-9ckm2\" (UID: \"2b8eb903-1c76-4c64-b9ec-f33f22e756cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9ckm2" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.502657 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9fba317-198f-48f6-b678-9a0df33df707-serving-cert\") pod \"openshift-config-operator-7777fb866f-ktp4v\" (UID: \"b9fba317-198f-48f6-b678-9a0df33df707\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ktp4v" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.502786 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9195c217-c5bc-4625-9b9c-2aa209485e3c-secret-volume\") pod \"collect-profiles-29521245-9xzcs\" (UID: \"9195c217-c5bc-4625-9b9c-2aa209485e3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-9xzcs" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.502889 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1-socket-dir\") pod \"csi-hostpathplugin-mvkhm\" (UID: \"eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1\") " 
pod="hostpath-provisioner/csi-hostpathplugin-mvkhm" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.503259 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/49885968-d644-4257-9e7c-7ed6bc875f9e-signing-cabundle\") pod \"service-ca-9c57cc56f-z67dz\" (UID: \"49885968-d644-4257-9e7c-7ed6bc875f9e\") " pod="openshift-service-ca/service-ca-9c57cc56f-z67dz" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.503385 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8brb\" (UniqueName: \"kubernetes.io/projected/8ae95b88-69b4-470b-9551-6f6412d991ac-kube-api-access-x8brb\") pod \"olm-operator-6b444d44fb-vx5rb\" (UID: \"8ae95b88-69b4-470b-9551-6f6412d991ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vx5rb" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.503477 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g47q\" (UniqueName: \"kubernetes.io/projected/2b8eb903-1c76-4c64-b9ec-f33f22e756cf-kube-api-access-8g47q\") pod \"catalog-operator-68c6474976-9ckm2\" (UID: \"2b8eb903-1c76-4c64-b9ec-f33f22e756cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9ckm2" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.505321 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/49885968-d644-4257-9e7c-7ed6bc875f9e-signing-key\") pod \"service-ca-9c57cc56f-z67dz\" (UID: \"49885968-d644-4257-9e7c-7ed6bc875f9e\") " pod="openshift-service-ca/service-ca-9c57cc56f-z67dz" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.506036 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/82635068-e556-4de1-be36-160c60aed1d4-apiservice-cert\") pod \"packageserver-d55dfcdfc-bpcmh\" 
(UID: \"82635068-e556-4de1-be36-160c60aed1d4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bpcmh" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.507115 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4km29" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.508247 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9195c217-c5bc-4625-9b9c-2aa209485e3c-config-volume\") pod \"collect-profiles-29521245-9xzcs\" (UID: \"9195c217-c5bc-4625-9b9c-2aa209485e3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-9xzcs" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.509491 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8d5d20c-9da0-4bf1-9f57-d3b96c8736e6-metrics-tls\") pod \"dns-operator-744455d44c-2cqpn\" (UID: \"d8d5d20c-9da0-4bf1-9f57-d3b96c8736e6\") " pod="openshift-dns-operator/dns-operator-744455d44c-2cqpn" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.512580 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12b6e0a6-4e13-4393-a8c3-6820aeda2913-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hbbzr\" (UID: \"12b6e0a6-4e13-4393-a8c3-6820aeda2913\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbbzr" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.514853 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f81a0a6-152c-48ca-8eec-eb9e330d3902-config\") pod \"service-ca-operator-777779d784-tm698\" (UID: \"9f81a0a6-152c-48ca-8eec-eb9e330d3902\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tm698" Feb 16 20:58:44 crc 
kubenswrapper[4811]: I0216 20:58:44.519878 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/49885968-d644-4257-9e7c-7ed6bc875f9e-signing-cabundle\") pod \"service-ca-9c57cc56f-z67dz\" (UID: \"49885968-d644-4257-9e7c-7ed6bc875f9e\") " pod="openshift-service-ca/service-ca-9c57cc56f-z67dz" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.525078 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/82635068-e556-4de1-be36-160c60aed1d4-webhook-cert\") pod \"packageserver-d55dfcdfc-bpcmh\" (UID: \"82635068-e556-4de1-be36-160c60aed1d4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bpcmh" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.535937 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2b8eb903-1c76-4c64-b9ec-f33f22e756cf-srv-cert\") pod \"catalog-operator-68c6474976-9ckm2\" (UID: \"2b8eb903-1c76-4c64-b9ec-f33f22e756cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9ckm2" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.537885 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9fba317-198f-48f6-b678-9a0df33df707-serving-cert\") pod \"openshift-config-operator-7777fb866f-ktp4v\" (UID: \"b9fba317-198f-48f6-b678-9a0df33df707\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ktp4v" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.539695 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9195c217-c5bc-4625-9b9c-2aa209485e3c-secret-volume\") pod \"collect-profiles-29521245-9xzcs\" (UID: \"9195c217-c5bc-4625-9b9c-2aa209485e3c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-9xzcs" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.540521 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/49885968-d644-4257-9e7c-7ed6bc875f9e-signing-key\") pod \"service-ca-9c57cc56f-z67dz\" (UID: \"49885968-d644-4257-9e7c-7ed6bc875f9e\") " pod="openshift-service-ca/service-ca-9c57cc56f-z67dz" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.543425 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12b6e0a6-4e13-4393-a8c3-6820aeda2913-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hbbzr\" (UID: \"12b6e0a6-4e13-4393-a8c3-6820aeda2913\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbbzr" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.547379 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-ttddd"] Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.549469 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86dqf\" (UniqueName: \"kubernetes.io/projected/d8d5d20c-9da0-4bf1-9f57-d3b96c8736e6-kube-api-access-86dqf\") pod \"dns-operator-744455d44c-2cqpn\" (UID: \"d8d5d20c-9da0-4bf1-9f57-d3b96c8736e6\") " pod="openshift-dns-operator/dns-operator-744455d44c-2cqpn" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.550889 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ae95b88-69b4-470b-9551-6f6412d991ac-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vx5rb\" (UID: \"8ae95b88-69b4-470b-9551-6f6412d991ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vx5rb" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.551845 4811 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2cqpn" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.547922 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ae95b88-69b4-470b-9551-6f6412d991ac-srv-cert\") pod \"olm-operator-6b444d44fb-vx5rb\" (UID: \"8ae95b88-69b4-470b-9551-6f6412d991ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vx5rb" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.579539 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kk27l" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.581360 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hbnr\" (UniqueName: \"kubernetes.io/projected/49885968-d644-4257-9e7c-7ed6bc875f9e-kube-api-access-2hbnr\") pod \"service-ca-9c57cc56f-z67dz\" (UID: \"49885968-d644-4257-9e7c-7ed6bc875f9e\") " pod="openshift-service-ca/service-ca-9c57cc56f-z67dz" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.587889 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12b6e0a6-4e13-4393-a8c3-6820aeda2913-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hbbzr\" (UID: \"12b6e0a6-4e13-4393-a8c3-6820aeda2913\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbbzr" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.587907 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqnl8\" (UniqueName: \"kubernetes.io/projected/9f81a0a6-152c-48ca-8eec-eb9e330d3902-kube-api-access-hqnl8\") pod \"service-ca-operator-777779d784-tm698\" (UID: \"9f81a0a6-152c-48ca-8eec-eb9e330d3902\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tm698" Feb 16 20:58:44 
crc kubenswrapper[4811]: I0216 20:58:44.608013 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/41249bd4-022b-44a5-aea7-130e9ffa2117-certs\") pod \"machine-config-server-hbhzt\" (UID: \"41249bd4-022b-44a5-aea7-130e9ffa2117\") " pod="openshift-machine-config-operator/machine-config-server-hbhzt" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.608626 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/41249bd4-022b-44a5-aea7-130e9ffa2117-node-bootstrap-token\") pod \"machine-config-server-hbhzt\" (UID: \"41249bd4-022b-44a5-aea7-130e9ffa2117\") " pod="openshift-machine-config-operator/machine-config-server-hbhzt" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.608657 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1-plugins-dir\") pod \"csi-hostpathplugin-mvkhm\" (UID: \"eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1\") " pod="hostpath-provisioner/csi-hostpathplugin-mvkhm" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.608688 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.608712 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1-registration-dir\") pod \"csi-hostpathplugin-mvkhm\" (UID: \"eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1\") " 
pod="hostpath-provisioner/csi-hostpathplugin-mvkhm" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.608734 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhb74\" (UniqueName: \"kubernetes.io/projected/eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1-kube-api-access-fhb74\") pod \"csi-hostpathplugin-mvkhm\" (UID: \"eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1\") " pod="hostpath-provisioner/csi-hostpathplugin-mvkhm" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.608772 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7kgr\" (UniqueName: \"kubernetes.io/projected/41249bd4-022b-44a5-aea7-130e9ffa2117-kube-api-access-l7kgr\") pod \"machine-config-server-hbhzt\" (UID: \"41249bd4-022b-44a5-aea7-130e9ffa2117\") " pod="openshift-machine-config-operator/machine-config-server-hbhzt" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.608810 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1-mountpoint-dir\") pod \"csi-hostpathplugin-mvkhm\" (UID: \"eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1\") " pod="hostpath-provisioner/csi-hostpathplugin-mvkhm" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.608853 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1-socket-dir\") pod \"csi-hostpathplugin-mvkhm\" (UID: \"eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1\") " pod="hostpath-provisioner/csi-hostpathplugin-mvkhm" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.608875 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1-plugins-dir\") pod \"csi-hostpathplugin-mvkhm\" (UID: \"eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1\") " 
pod="hostpath-provisioner/csi-hostpathplugin-mvkhm" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.609021 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1-csi-data-dir\") pod \"csi-hostpathplugin-mvkhm\" (UID: \"eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1\") " pod="hostpath-provisioner/csi-hostpathplugin-mvkhm" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.608910 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1-csi-data-dir\") pod \"csi-hostpathplugin-mvkhm\" (UID: \"eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1\") " pod="hostpath-provisioner/csi-hostpathplugin-mvkhm" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.609099 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1-mountpoint-dir\") pod \"csi-hostpathplugin-mvkhm\" (UID: \"eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1\") " pod="hostpath-provisioner/csi-hostpathplugin-mvkhm" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.609158 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1-socket-dir\") pod \"csi-hostpathplugin-mvkhm\" (UID: \"eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1\") " pod="hostpath-provisioner/csi-hostpathplugin-mvkhm" Feb 16 20:58:44 crc kubenswrapper[4811]: E0216 20:58:44.609165 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:45.109143934 +0000 UTC m=+143.038439872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.609296 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1-registration-dir\") pod \"csi-hostpathplugin-mvkhm\" (UID: \"eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1\") " pod="hostpath-provisioner/csi-hostpathplugin-mvkhm" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.627922 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/41249bd4-022b-44a5-aea7-130e9ffa2117-certs\") pod \"machine-config-server-hbhzt\" (UID: \"41249bd4-022b-44a5-aea7-130e9ffa2117\") " pod="openshift-machine-config-operator/machine-config-server-hbhzt" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.627923 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr728\" (UniqueName: \"kubernetes.io/projected/d5952073-af7a-4268-b4e2-ad8e98b0e02a-kube-api-access-lr728\") pod \"dns-default-nlp5w\" (UID: \"d5952073-af7a-4268-b4e2-ad8e98b0e02a\") " pod="openshift-dns/dns-default-nlp5w" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.628935 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/41249bd4-022b-44a5-aea7-130e9ffa2117-node-bootstrap-token\") pod \"machine-config-server-hbhzt\" (UID: \"41249bd4-022b-44a5-aea7-130e9ffa2117\") " 
pod="openshift-machine-config-operator/machine-config-server-hbhzt" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.636055 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdp22\" (UniqueName: \"kubernetes.io/projected/86a38724-0aff-4a27-bebf-7eab7ffa24bc-kube-api-access-pdp22\") pod \"kube-storage-version-migrator-operator-b67b599dd-6q4n5\" (UID: \"86a38724-0aff-4a27-bebf-7eab7ffa24bc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6q4n5" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.637700 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgtxl"] Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.646280 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" event={"ID":"a956e785-7e90-41d8-97ea-d89664b3719a","Type":"ContainerStarted","Data":"e0b38235ea1ec1141011fb74bbfd0d03028ce1f5eb8bc51237beae787a68a567"} Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.646933 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.649460 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-lbxk8" event={"ID":"2d817b52-21fc-40d9-a36f-487e6719ebfe","Type":"ContainerStarted","Data":"f55753ed1824867a70bc4fe6b6a74464b47cf4240b4a95ec981b0e78e9b83360"} Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.651391 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x6d8\" (UniqueName: \"kubernetes.io/projected/9195c217-c5bc-4625-9b9c-2aa209485e3c-kube-api-access-9x6d8\") pod \"collect-profiles-29521245-9xzcs\" (UID: \"9195c217-c5bc-4625-9b9c-2aa209485e3c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-9xzcs" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.652342 4811 generic.go:334] "Generic (PLEG): container finished" podID="81a41d1f-0c1d-41cf-991b-f521c34bde80" containerID="32f17bec9f942147ead54e1dc484acfaf37a1ba5f2ff9d58594d85254766087b" exitCode=0 Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.652431 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-njf2g" event={"ID":"81a41d1f-0c1d-41cf-991b-f521c34bde80","Type":"ContainerDied","Data":"32f17bec9f942147ead54e1dc484acfaf37a1ba5f2ff9d58594d85254766087b"} Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.653504 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qjhps" event={"ID":"218883f2-cdcd-4b76-8f3c-dea0af40092c","Type":"ContainerStarted","Data":"3423f5958ce5204650b1c3ef9c167b8d1800bf7dfe3568a8236858c21d4c9ed9"} Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.655319 4811 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-hxljc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.655372 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" podUID="a956e785-7e90-41d8-97ea-d89664b3719a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.657120 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rpx95" 
event={"ID":"4ce7d7ec-2a9d-4404-917a-da07f09d990d","Type":"ContainerStarted","Data":"acdcda95667ee5773ba2af824074110c54213d41d6cd7a89e54ffe1925d93b8f"} Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.659772 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-z67dz" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.663251 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mn795" event={"ID":"56ef6d7e-b0bf-4bfa-8426-68040e136fe1","Type":"ContainerStarted","Data":"1e246db7543f20f39e756fabafb53a4bcae5b31d1e38d02fb7a780544b931b70"} Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.663340 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mn795" event={"ID":"56ef6d7e-b0bf-4bfa-8426-68040e136fe1","Type":"ContainerStarted","Data":"b8edfb8c7931735129f17a27d4a924548d53d6fce8a72b7563bd9dbba5e28e41"} Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.664615 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-mn795" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.666084 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-mn795 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.666124 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mn795" podUID="56ef6d7e-b0bf-4bfa-8426-68040e136fe1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.671643 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-9xzcs" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.674721 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd" event={"ID":"e17f4635-2bd6-4ad1-b337-63c0e87ac247","Type":"ContainerStarted","Data":"16cc6369d95929998f9e7c7b260446afd8ba86598d733e45d2ba266d9fb63c17"} Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.675870 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tf97\" (UniqueName: \"kubernetes.io/projected/b9fba317-198f-48f6-b678-9a0df33df707-kube-api-access-8tf97\") pod \"openshift-config-operator-7777fb866f-ktp4v\" (UID: \"b9fba317-198f-48f6-b678-9a0df33df707\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ktp4v" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.676379 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.682474 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tm698" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.683728 4811 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-zqrkd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.683794 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd" podUID="e17f4635-2bd6-4ad1-b337-63c0e87ac247" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.684111 4811 generic.go:334] "Generic (PLEG): container finished" podID="35fa6f12-cf55-48d7-82ef-4987071adff7" containerID="8734c12b162c641f9807d27be1e40832fc1ef1c67b589f2d642e5fee9a7dcb5b" exitCode=0 Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.684682 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" event={"ID":"35fa6f12-cf55-48d7-82ef-4987071adff7","Type":"ContainerDied","Data":"8734c12b162c641f9807d27be1e40832fc1ef1c67b589f2d642e5fee9a7dcb5b"} Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.692758 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-cch5x" event={"ID":"48e7d3f9-4e00-476b-90bc-9d238ef4f5ca","Type":"ContainerStarted","Data":"4d5209e6a6a9e747dc2a75115153381861b57e91eae43868d2f3969c216c0b66"} Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.692807 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-cch5x" 
event={"ID":"48e7d3f9-4e00-476b-90bc-9d238ef4f5ca","Type":"ContainerStarted","Data":"cb7227aa28feb832582657880d70e8722efcb8fc79b40f023dcd2e5f870d1f77"} Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.693740 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-cch5x" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.694962 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-nlp5w" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.695999 4811 patch_prober.go:28] interesting pod/console-operator-58897d9998-cch5x container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.696048 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-cch5x" podUID="48e7d3f9-4e00-476b-90bc-9d238ef4f5ca" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.696811 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gx777" event={"ID":"90270354-a779-4378-8bca-c2ff51ecac2e","Type":"ContainerStarted","Data":"d646ff5dbc51a6731589287eb02cb97836209b423a293e13f292d613bd690b82"} Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.696862 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gx777" event={"ID":"90270354-a779-4378-8bca-c2ff51ecac2e","Type":"ContainerStarted","Data":"abd50a2675a29aa160eff5858d051b64ab43e8b0a5592a1e4b39ebb952a79e8e"} Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 
20:58:44.713489 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll5c9\" (UniqueName: \"kubernetes.io/projected/82635068-e556-4de1-be36-160c60aed1d4-kube-api-access-ll5c9\") pod \"packageserver-d55dfcdfc-bpcmh\" (UID: \"82635068-e556-4de1-be36-160c60aed1d4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bpcmh" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.713613 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:44 crc kubenswrapper[4811]: E0216 20:58:44.714306 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:45.214286396 +0000 UTC m=+143.143582334 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.738142 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g47q\" (UniqueName: \"kubernetes.io/projected/2b8eb903-1c76-4c64-b9ec-f33f22e756cf-kube-api-access-8g47q\") pod \"catalog-operator-68c6474976-9ckm2\" (UID: \"2b8eb903-1c76-4c64-b9ec-f33f22e756cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9ckm2" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.740313 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4f8kg"] Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.740361 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g82fx" event={"ID":"b3a24eed-0751-45b0-945e-6351d15be4f6","Type":"ContainerStarted","Data":"c3828319a45d79d897ca747c9afbd08e147ed11f718b6e79829a190e63232132"} Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.740396 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g82fx" event={"ID":"b3a24eed-0751-45b0-945e-6351d15be4f6","Type":"ContainerStarted","Data":"fbc7012456f10e19f3e389fc9b181ad2b84c31406617082dbe05aaca845f100e"} Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.745860 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8brb\" (UniqueName: 
\"kubernetes.io/projected/8ae95b88-69b4-470b-9551-6f6412d991ac-kube-api-access-x8brb\") pod \"olm-operator-6b444d44fb-vx5rb\" (UID: \"8ae95b88-69b4-470b-9551-6f6412d991ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vx5rb" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.755614 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-zwtjs"] Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.792426 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7kgr\" (UniqueName: \"kubernetes.io/projected/41249bd4-022b-44a5-aea7-130e9ffa2117-kube-api-access-l7kgr\") pod \"machine-config-server-hbhzt\" (UID: \"41249bd4-022b-44a5-aea7-130e9ffa2117\") " pod="openshift-machine-config-operator/machine-config-server-hbhzt" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.827190 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhb74\" (UniqueName: \"kubernetes.io/projected/eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1-kube-api-access-fhb74\") pod \"csi-hostpathplugin-mvkhm\" (UID: \"eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1\") " pod="hostpath-provisioner/csi-hostpathplugin-mvkhm" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.831343 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbbzr" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.841298 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:44 crc kubenswrapper[4811]: E0216 20:58:44.843918 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:45.343903607 +0000 UTC m=+143.273199545 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.859918 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ktp4v" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.867466 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9ckm2" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.874983 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vx5rb" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.894553 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6q4n5" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.933950 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bpcmh" Feb 16 20:58:44 crc kubenswrapper[4811]: I0216 20:58:44.951173 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:44 crc kubenswrapper[4811]: E0216 20:58:44.951591 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:45.451573073 +0000 UTC m=+143.380869011 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:45 crc kubenswrapper[4811]: I0216 20:58:45.001028 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-hbhzt" Feb 16 20:58:45 crc kubenswrapper[4811]: I0216 20:58:45.023993 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-mvkhm" Feb 16 20:58:45 crc kubenswrapper[4811]: W0216 20:58:45.040687 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda101d06e_8e7f_4fcf_9788_a54237068ad7.slice/crio-9b8ee33f9806a03d68cf5c558c0b38b22ffd1ba88427f8c92bbfe1c2c863f216 WatchSource:0}: Error finding container 9b8ee33f9806a03d68cf5c558c0b38b22ffd1ba88427f8c92bbfe1c2c863f216: Status 404 returned error can't find the container with id 9b8ee33f9806a03d68cf5c558c0b38b22ffd1ba88427f8c92bbfe1c2c863f216 Feb 16 20:58:45 crc kubenswrapper[4811]: I0216 20:58:45.053754 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:45 crc kubenswrapper[4811]: E0216 20:58:45.054818 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:45.554799318 +0000 UTC m=+143.484095256 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:45 crc kubenswrapper[4811]: I0216 20:58:45.154633 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:45 crc kubenswrapper[4811]: E0216 20:58:45.155055 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:45.655038237 +0000 UTC m=+143.584334175 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:45 crc kubenswrapper[4811]: I0216 20:58:45.256941 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:45 crc kubenswrapper[4811]: E0216 20:58:45.257820 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:45.7578045 +0000 UTC m=+143.687100438 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:45 crc kubenswrapper[4811]: I0216 20:58:45.363294 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:45 crc kubenswrapper[4811]: E0216 20:58:45.363802 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:45.863785574 +0000 UTC m=+143.793081522 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:45 crc kubenswrapper[4811]: I0216 20:58:45.465097 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:45 crc kubenswrapper[4811]: E0216 20:58:45.465425 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:45.965413148 +0000 UTC m=+143.894709076 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:45 crc kubenswrapper[4811]: I0216 20:58:45.565855 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:45 crc kubenswrapper[4811]: E0216 20:58:45.566721 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:46.066703213 +0000 UTC m=+143.995999151 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:45 crc kubenswrapper[4811]: I0216 20:58:45.668508 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:45 crc kubenswrapper[4811]: E0216 20:58:45.668963 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:46.168943713 +0000 UTC m=+144.098239651 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:45 crc kubenswrapper[4811]: I0216 20:58:45.752098 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-zwtjs" event={"ID":"a101d06e-8e7f-4fcf-9788-a54237068ad7","Type":"ContainerStarted","Data":"9b8ee33f9806a03d68cf5c558c0b38b22ffd1ba88427f8c92bbfe1c2c863f216"} Feb 16 20:58:45 crc kubenswrapper[4811]: I0216 20:58:45.771711 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-lbxk8" event={"ID":"2d817b52-21fc-40d9-a36f-487e6719ebfe","Type":"ContainerStarted","Data":"82314c3b64649b66561252c40298984050a2cda534f274cfc57fa10659a49661"} Feb 16 20:58:45 crc kubenswrapper[4811]: I0216 20:58:45.773535 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:45 crc kubenswrapper[4811]: E0216 20:58:45.774801 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:46.274776943 +0000 UTC m=+144.204072881 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:45 crc kubenswrapper[4811]: I0216 20:58:45.838924 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" event={"ID":"7ff60cdb-3618-4902-a679-e5bda29c5c60","Type":"ContainerStarted","Data":"62928bbecb86564d117b12f09538af4aad8164c5be93b1a33e7d4271a1d27eee"} Feb 16 20:58:45 crc kubenswrapper[4811]: I0216 20:58:45.876650 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:45 crc kubenswrapper[4811]: E0216 20:58:45.878372 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:46.378358937 +0000 UTC m=+144.307654875 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:45 crc kubenswrapper[4811]: I0216 20:58:45.893404 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qjhps" event={"ID":"218883f2-cdcd-4b76-8f3c-dea0af40092c","Type":"ContainerStarted","Data":"095e8077fc811b83e17cecdfc1c6409ef51469dc6043c9d9aca0a2dc69a04a9f"} Feb 16 20:58:45 crc kubenswrapper[4811]: I0216 20:58:45.928162 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-nl2ks"] Feb 16 20:58:45 crc kubenswrapper[4811]: I0216 20:58:45.978396 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:45 crc kubenswrapper[4811]: E0216 20:58:45.978824 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:46.478790681 +0000 UTC m=+144.408086619 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:45 crc kubenswrapper[4811]: I0216 20:58:45.979095 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:45 crc kubenswrapper[4811]: E0216 20:58:45.979601 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:46.479594251 +0000 UTC m=+144.408890189 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.008134 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wljzx"] Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.010673 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-ttddd" event={"ID":"a354d7fc-db3d-4d2b-bab5-973e5fb71d3e","Type":"ContainerStarted","Data":"65cb778c315ad88a4fb0c69ca8d8c8d62571407b0d94694a153ba0669f65946c"} Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.010717 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-ttddd" event={"ID":"a354d7fc-db3d-4d2b-bab5-973e5fb71d3e","Type":"ContainerStarted","Data":"3a5f8d1e2870466550ff9b012ea24d0f7148f12b72059898fc338909dc557121"} Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.028734 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgtxl" event={"ID":"79f24eee-94ca-47b2-bcc5-389f01bf5849","Type":"ContainerStarted","Data":"33e29e6dd3818b067d81c87ff088dbfef877ed4d906a4d236475ed585ce356a4"} Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.037350 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-hbhzt" 
event={"ID":"41249bd4-022b-44a5-aea7-130e9ffa2117","Type":"ContainerStarted","Data":"e407ae165e681e45c4b1fe2810ea9d4527a0868f3ab1edfe2478ef405f55fcd2"} Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.039756 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g82fx" event={"ID":"b3a24eed-0751-45b0-945e-6351d15be4f6","Type":"ContainerStarted","Data":"9bf07bbc9390b7a748caebfc4a55befc702ef5852cc601e911958d03757a4898"} Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.042137 4811 patch_prober.go:28] interesting pod/console-operator-58897d9998-cch5x container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.042228 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-cch5x" podUID="48e7d3f9-4e00-476b-90bc-9d238ef4f5ca" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.043624 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-mn795 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.043647 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mn795" podUID="56ef6d7e-b0bf-4bfa-8426-68040e136fe1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 
20:58:46.065493 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.067140 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fh4pc"] Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.069088 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd" Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.080126 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:46 crc kubenswrapper[4811]: E0216 20:58:46.081802 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:46.581785199 +0000 UTC m=+144.511081137 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.095263 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8vgph"] Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.190454 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:46 crc kubenswrapper[4811]: E0216 20:58:46.191011 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:46.690989635 +0000 UTC m=+144.620285583 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.299269 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:46 crc kubenswrapper[4811]: E0216 20:58:46.300163 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:46.800123858 +0000 UTC m=+144.729419796 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.319376 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-lbxk8" Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.348818 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-9hxzk"] Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.355827 4811 patch_prober.go:28] interesting pod/router-default-5444994796-lbxk8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 20:58:46 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 16 20:58:46 crc kubenswrapper[4811]: [+]process-running ok Feb 16 20:58:46 crc kubenswrapper[4811]: healthz check failed Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.355899 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lbxk8" podUID="2d817b52-21fc-40d9-a36f-487e6719ebfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.404538 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffr95"] Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.405285 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:46 crc kubenswrapper[4811]: E0216 20:58:46.405717 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:46.905699902 +0000 UTC m=+144.834995840 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.414111 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7wh82"] Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.469478 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" podStartSLOduration=123.46944295 podStartE2EDuration="2m3.46944295s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:46.438618152 +0000 UTC m=+144.367914080" watchObservedRunningTime="2026-02-16 20:58:46.46944295 +0000 UTC m=+144.398738888" Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.524939 4811 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:46 crc kubenswrapper[4811]: E0216 20:58:46.525140 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:47.025101044 +0000 UTC m=+144.954396992 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.526475 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:46 crc kubenswrapper[4811]: E0216 20:58:46.545298 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:47.045274293 +0000 UTC m=+144.974570231 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.554464 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gdcrk"] Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.557670 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-lbxk8" podStartSLOduration=123.557653286 podStartE2EDuration="2m3.557653286s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:46.480255193 +0000 UTC m=+144.409551131" watchObservedRunningTime="2026-02-16 20:58:46.557653286 +0000 UTC m=+144.486949224" Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.567452 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8hwk8"] Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.600251 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-cch5x" podStartSLOduration=123.598681091 podStartE2EDuration="2m3.598681091s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:46.514111497 +0000 UTC m=+144.443407455" watchObservedRunningTime="2026-02-16 
20:58:46.598681091 +0000 UTC m=+144.527977029" Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.616787 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qnxsg"] Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.626064 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-gx777" podStartSLOduration=123.618649085 podStartE2EDuration="2m3.618649085s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:46.547961501 +0000 UTC m=+144.477257439" watchObservedRunningTime="2026-02-16 20:58:46.618649085 +0000 UTC m=+144.547945023" Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.644016 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521245-9xzcs"] Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.645500 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-ttddd" podStartSLOduration=5.645476062 podStartE2EDuration="5.645476062s" podCreationTimestamp="2026-02-16 20:58:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:46.574164862 +0000 UTC m=+144.503460800" watchObservedRunningTime="2026-02-16 20:58:46.645476062 +0000 UTC m=+144.574772000" Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.647908 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:46 crc kubenswrapper[4811]: E0216 20:58:46.648329 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:47.148303563 +0000 UTC m=+145.077599501 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.672308 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd" podStartSLOduration=122.672269968 podStartE2EDuration="2m2.672269968s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:46.59588248 +0000 UTC m=+144.525178408" watchObservedRunningTime="2026-02-16 20:58:46.672269968 +0000 UTC m=+144.601565906" Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.679952 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tm698"] Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.680308 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rpx95" podStartSLOduration=123.68028522 podStartE2EDuration="2m3.68028522s" 
podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:46.642601829 +0000 UTC m=+144.571897767" watchObservedRunningTime="2026-02-16 20:58:46.68028522 +0000 UTC m=+144.609581158" Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.758976 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:46 crc kubenswrapper[4811]: E0216 20:58:46.759429 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:47.259408946 +0000 UTC m=+145.188704904 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.765682 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2cqpn"] Feb 16 20:58:46 crc kubenswrapper[4811]: W0216 20:58:46.778978 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8d5d20c_9da0_4bf1_9f57_d3b96c8736e6.slice/crio-f4fb701e32872653f70e8baf7884b9bbfd487cd6ef119a8f02ce833958e3ee4b WatchSource:0}: Error finding container f4fb701e32872653f70e8baf7884b9bbfd487cd6ef119a8f02ce833958e3ee4b: Status 404 returned error can't find the container with id f4fb701e32872653f70e8baf7884b9bbfd487cd6ef119a8f02ce833958e3ee4b Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.829340 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g82fx" podStartSLOduration=123.82931516 podStartE2EDuration="2m3.82931516s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:46.795900207 +0000 UTC m=+144.725196145" watchObservedRunningTime="2026-02-16 20:58:46.82931516 +0000 UTC m=+144.758611098" Feb 16 20:58:46 crc kubenswrapper[4811]: W0216 20:58:46.842628 4811 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f81a0a6_152c_48ca_8eec_eb9e330d3902.slice/crio-05c954340ff5374bb54003deef9e25d0404692ecfcc8c2ad3a04382b1184fd64 WatchSource:0}: Error finding container 05c954340ff5374bb54003deef9e25d0404692ecfcc8c2ad3a04382b1184fd64: Status 404 returned error can't find the container with id 05c954340ff5374bb54003deef9e25d0404692ecfcc8c2ad3a04382b1184fd64 Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.860440 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:46 crc kubenswrapper[4811]: E0216 20:58:46.860939 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:47.360917547 +0000 UTC m=+145.290213485 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.885515 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-mn795" podStartSLOduration=123.885485557 podStartE2EDuration="2m3.885485557s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:46.828946771 +0000 UTC m=+144.758242709" watchObservedRunningTime="2026-02-16 20:58:46.885485557 +0000 UTC m=+144.814781495" Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.888847 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kk27l"] Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.892724 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n8rd6"] Feb 16 20:58:46 crc kubenswrapper[4811]: I0216 20:58:46.963842 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:46 crc kubenswrapper[4811]: E0216 20:58:46.964229 4811 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:47.464213403 +0000 UTC m=+145.393509341 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:46 crc kubenswrapper[4811]: W0216 20:58:46.965016 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45722898_287e_4a8e_8816_5928e178d2d7.slice/crio-a353a9621f9f9993210bbd91ac6db505905884818960ecd71200c85eeeb4a3b8 WatchSource:0}: Error finding container a353a9621f9f9993210bbd91ac6db505905884818960ecd71200c85eeeb4a3b8: Status 404 returned error can't find the container with id a353a9621f9f9993210bbd91ac6db505905884818960ecd71200c85eeeb4a3b8 Feb 16 20:58:46 crc kubenswrapper[4811]: W0216 20:58:46.988803 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfae08180_6d56_48f5_99c6_d98b52eb0ccf.slice/crio-65a1626efb8088139cc0d60284310c1eafcdc162d958bfc994c209865b84cdf0 WatchSource:0}: Error finding container 65a1626efb8088139cc0d60284310c1eafcdc162d958bfc994c209865b84cdf0: Status 404 returned error can't find the container with id 65a1626efb8088139cc0d60284310c1eafcdc162d958bfc994c209865b84cdf0 Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.010599 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-z67dz"] Feb 16 20:58:47 crc kubenswrapper[4811]: 
I0216 20:58:47.068123 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:47 crc kubenswrapper[4811]: E0216 20:58:47.068633 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:47.568594157 +0000 UTC m=+145.497890095 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.073981 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4km29"] Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.102509 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-njf2g" event={"ID":"81a41d1f-0c1d-41cf-991b-f521c34bde80","Type":"ContainerStarted","Data":"9df2e9f00d0a0601cda75eb8b849a6a83184fb579117c3d336d1b0c3e7c78def"} Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.104561 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kk27l" 
event={"ID":"fae08180-6d56-48f5-99c6-d98b52eb0ccf","Type":"ContainerStarted","Data":"65a1626efb8088139cc0d60284310c1eafcdc162d958bfc994c209865b84cdf0"} Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.105734 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gdcrk" event={"ID":"eddedb9f-4d8f-467e-94a0-3e2b45746f42","Type":"ContainerStarted","Data":"d97c0f7125abd4358e0f7d2f0fb3463f383c8f55cece906512a7ac77d7a8ee52"} Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.109533 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-zwtjs" event={"ID":"a101d06e-8e7f-4fcf-9788-a54237068ad7","Type":"ContainerStarted","Data":"8e15c60d438e60deeec77c0e9d4a7aadaa180087bb6e88b2e0fecadcb1bbd997"} Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.117268 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffr95" event={"ID":"451b0c97-6ae1-4cb7-ac95-e4ecf08b0587","Type":"ContainerStarted","Data":"e40a027cbbe76f22c05aa59aeb6f6d7a96924d3d3ef2ac9e6865aa9127dc3570"} Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.132476 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-9hxzk" event={"ID":"f2292d96-838d-4d2c-a325-bb2d7f2d2eda","Type":"ContainerStarted","Data":"0b1ee59584a2e90e1fe6300e31f22f093d588dab4262351e4aabb262a7430d2e"} Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.144449 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-zwtjs" podStartSLOduration=124.14443052 podStartE2EDuration="2m4.14443052s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 20:58:47.143345633 +0000 UTC m=+145.072641571" watchObservedRunningTime="2026-02-16 20:58:47.14443052 +0000 UTC m=+145.073726458" Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.146695 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fh4pc" event={"ID":"7f8d67f2-74fc-4244-a62e-97fed3b28c79","Type":"ContainerStarted","Data":"bff9b5bba2962841cc507a1951f05fdc182376b53e343fb16dbf22f8e19ebff0"} Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.186525 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:47 crc kubenswrapper[4811]: E0216 20:58:47.187090 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:47.687073046 +0000 UTC m=+145.616368984 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.187812 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nlp5w"] Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.195999 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qjhps" event={"ID":"218883f2-cdcd-4b76-8f3c-dea0af40092c","Type":"ContainerStarted","Data":"55eaef1411d1ab9fa28f23a6595354cdcba7e463f97569f040e493e5554c2902"} Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.215763 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" event={"ID":"35fa6f12-cf55-48d7-82ef-4987071adff7","Type":"ContainerStarted","Data":"6ea1a5c3ff8f33702f78c25f4958e42d01d6a3889124aabcf6fecf5673f2f018"} Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.226305 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qjhps" podStartSLOduration=124.226273665 podStartE2EDuration="2m4.226273665s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:47.225773183 +0000 UTC m=+145.155069121" watchObservedRunningTime="2026-02-16 20:58:47.226273665 +0000 UTC m=+145.155569603" Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.264839 4811 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgtxl" event={"ID":"79f24eee-94ca-47b2-bcc5-389f01bf5849","Type":"ContainerStarted","Data":"f3bf4ec93458632c11d464bdb0de39c88b1614a87976b8bc7baa00f3ba91b789"} Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.281217 4811 csr.go:261] certificate signing request csr-gbzwh is approved, waiting to be issued Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.310962 4811 csr.go:257] certificate signing request csr-gbzwh is issued Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.313998 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8hwk8" event={"ID":"d76477a2-14d0-4d86-b850-a980bf3ca21a","Type":"ContainerStarted","Data":"b83442b62a4b70d2b8ddd00a5cb2605dc7e14887520177da7c3e40f61e1d565f"} Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.314542 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.315309 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" podStartSLOduration=124.315300012 podStartE2EDuration="2m4.315300012s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:47.313597299 +0000 UTC m=+145.242893237" watchObservedRunningTime="2026-02-16 20:58:47.315300012 +0000 UTC m=+145.244595950" Feb 16 20:58:47 crc kubenswrapper[4811]: E0216 20:58:47.315660 4811 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:47.81564684 +0000 UTC m=+145.744942778 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.322603 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tm698" event={"ID":"9f81a0a6-152c-48ca-8eec-eb9e330d3902","Type":"ContainerStarted","Data":"05c954340ff5374bb54003deef9e25d0404692ecfcc8c2ad3a04382b1184fd64"} Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.324969 4811 patch_prober.go:28] interesting pod/router-default-5444994796-lbxk8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 20:58:47 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 16 20:58:47 crc kubenswrapper[4811]: [+]process-running ok Feb 16 20:58:47 crc kubenswrapper[4811]: healthz check failed Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.325002 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lbxk8" podUID="2d817b52-21fc-40d9-a36f-487e6719ebfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.326263 4811 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wljzx" event={"ID":"27b16b4d-6b71-4eba-955c-2f33c6c73a9d","Type":"ContainerStarted","Data":"de67a656e90d6ab80ddca71d23f84aa533b11cb8fb7d56f9907f711881558420"} Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.328422 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7wh82" event={"ID":"9e30dd8d-c885-4715-916c-2f87ff167589","Type":"ContainerStarted","Data":"5fe8a4f11cd0b59c258ff86a13d1aebb98b0c3ae2c500c3f3d9de602c75b4ffe"} Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.347352 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8vgph" event={"ID":"a00560cb-dc2f-489d-a2b1-aaecee43f0d3","Type":"ContainerStarted","Data":"a29bfb90b21d31192969bdf98f8a4de23df56ea3dff81ecbdbc127698ca566e2"} Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.347398 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8vgph" event={"ID":"a00560cb-dc2f-489d-a2b1-aaecee43f0d3","Type":"ContainerStarted","Data":"27a51f336ab8049cddf086ec6d61a98e3a2f5499f0930dc58b03a52e58554aa9"} Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.362366 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbbzr"] Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.366248 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" event={"ID":"7ff60cdb-3618-4902-a679-e5bda29c5c60","Type":"ContainerStarted","Data":"e644a6a1dbfddeb229d12302f02b8cc0427e7431df471d0aeccd1d9e7939e4f6"} Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.367170 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 
20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.376876 4811 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4f8kg container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.377321 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" podUID="7ff60cdb-3618-4902-a679-e5bda29c5c60" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.382400 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ktp4v"] Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.389087 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2cqpn" event={"ID":"d8d5d20c-9da0-4bf1-9f57-d3b96c8736e6","Type":"ContainerStarted","Data":"f4fb701e32872653f70e8baf7884b9bbfd487cd6ef119a8f02ce833958e3ee4b"} Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.397274 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mvkhm"] Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.416748 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.417397 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-server-hbhzt" event={"ID":"41249bd4-022b-44a5-aea7-130e9ffa2117","Type":"ContainerStarted","Data":"95416adea27d6688e683db2958ae196092d30d86934f9fdcbda743d36776f70b"} Feb 16 20:58:47 crc kubenswrapper[4811]: E0216 20:58:47.420331 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:47.920318401 +0000 UTC m=+145.849614339 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.421113 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgtxl" podStartSLOduration=124.421083571 podStartE2EDuration="2m4.421083571s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:47.330857954 +0000 UTC m=+145.260153892" watchObservedRunningTime="2026-02-16 20:58:47.421083571 +0000 UTC m=+145.350379519" Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.422249 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vx5rb"] Feb 16 20:58:47 crc kubenswrapper[4811]: W0216 20:58:47.429610 4811 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9fba317_198f_48f6_b678_9a0df33df707.slice/crio-96b0d6e315b1d563c8975af1e84ad313d6332323eec8198349b6b636794cb059 WatchSource:0}: Error finding container 96b0d6e315b1d563c8975af1e84ad313d6332323eec8198349b6b636794cb059: Status 404 returned error can't find the container with id 96b0d6e315b1d563c8975af1e84ad313d6332323eec8198349b6b636794cb059 Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.452539 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6q4n5"] Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.453798 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qnxsg" event={"ID":"b848efbd-79a2-4b6b-a42f-36f109a33e01","Type":"ContainerStarted","Data":"458a30b3a38703e37d87c8f2b7a0d05d4debb094251f329363232168c4967b11"} Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.455980 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wljzx" podStartSLOduration=124.45594415 podStartE2EDuration="2m4.45594415s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:47.38260883 +0000 UTC m=+145.311904758" watchObservedRunningTime="2026-02-16 20:58:47.45594415 +0000 UTC m=+145.385240088" Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.480322 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-8vgph" podStartSLOduration=124.480298645 podStartE2EDuration="2m4.480298645s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-16 20:58:47.417175792 +0000 UTC m=+145.346471720" watchObservedRunningTime="2026-02-16 20:58:47.480298645 +0000 UTC m=+145.409594583" Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.480843 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bpcmh"] Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.488124 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" podStartSLOduration=124.488098841 podStartE2EDuration="2m4.488098841s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:47.453357865 +0000 UTC m=+145.382653803" watchObservedRunningTime="2026-02-16 20:58:47.488098841 +0000 UTC m=+145.417394779" Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.510035 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-hbhzt" podStartSLOduration=6.510011203 podStartE2EDuration="6.510011203s" podCreationTimestamp="2026-02-16 20:58:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:47.496828862 +0000 UTC m=+145.426124800" watchObservedRunningTime="2026-02-16 20:58:47.510011203 +0000 UTC m=+145.439307141" Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.511145 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9ckm2"] Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.517885 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:47 crc kubenswrapper[4811]: E0216 20:58:47.519491 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:48.019471532 +0000 UTC m=+145.948767470 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:47 crc kubenswrapper[4811]: W0216 20:58:47.536427 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeef7166a_3f42_481e_9ecd_b8a6a8bc4cc1.slice/crio-03d2226481de05106bada5269e3884f54526f27cbe941461b36abb9931fc2740 WatchSource:0}: Error finding container 03d2226481de05106bada5269e3884f54526f27cbe941461b36abb9931fc2740: Status 404 returned error can't find the container with id 03d2226481de05106bada5269e3884f54526f27cbe941461b36abb9931fc2740 Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.540674 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-n8rd6" event={"ID":"45722898-287e-4a8e-8816-5928e178d2d7","Type":"ContainerStarted","Data":"a353a9621f9f9993210bbd91ac6db505905884818960ecd71200c85eeeb4a3b8"} Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.548132 4811 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-9xzcs" event={"ID":"9195c217-c5bc-4625-9b9c-2aa209485e3c","Type":"ContainerStarted","Data":"d52f5f3134d1206254db713cc9562a044fd45782b24e18e9176ce2a1a3531616"} Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.568112 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nl2ks" event={"ID":"c699beb7-358c-424b-ab7e-cd1396bd8803","Type":"ContainerStarted","Data":"4e5cc75592c589c34912b872bd27c0487b8ed58d5b1654da4393352aa5a8c583"} Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.568171 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nl2ks" event={"ID":"c699beb7-358c-424b-ab7e-cd1396bd8803","Type":"ContainerStarted","Data":"d8c146f84b3689196720bed27a314df8b6c3de23ec4561a337f95c2ebbe6e6cd"} Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.620522 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:47 crc kubenswrapper[4811]: E0216 20:58:47.620962 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:48.120946932 +0000 UTC m=+146.050242860 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.729499 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:47 crc kubenswrapper[4811]: E0216 20:58:47.731177 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:48.231157523 +0000 UTC m=+146.160453461 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.839383 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:47 crc kubenswrapper[4811]: E0216 20:58:47.839776 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:48.339765403 +0000 UTC m=+146.269061341 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:47 crc kubenswrapper[4811]: I0216 20:58:47.940963 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:47 crc kubenswrapper[4811]: E0216 20:58:47.941927 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:48.4419109 +0000 UTC m=+146.371206838 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.043601 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:48 crc kubenswrapper[4811]: E0216 20:58:48.044421 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:48.544408066 +0000 UTC m=+146.473704004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.110629 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.110716 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.145187 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:48 crc kubenswrapper[4811]: E0216 20:58:48.145333 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:48.645302332 +0000 UTC m=+146.574598270 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.145440 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:48 crc kubenswrapper[4811]: E0216 20:58:48.145828 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:48.645820025 +0000 UTC m=+146.575115963 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.248499 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:48 crc kubenswrapper[4811]: E0216 20:58:48.249431 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:48.749408238 +0000 UTC m=+146.678704176 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.312390 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-16 20:53:47 +0000 UTC, rotation deadline is 2026-11-13 11:05:22.869274988 +0000 UTC Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.312432 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6470h6m34.55684559s for next certificate rotation Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.329560 4811 patch_prober.go:28] interesting pod/router-default-5444994796-lbxk8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 20:58:48 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 16 20:58:48 crc kubenswrapper[4811]: [+]process-running ok Feb 16 20:58:48 crc kubenswrapper[4811]: healthz check failed Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.329630 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lbxk8" podUID="2d817b52-21fc-40d9-a36f-487e6719ebfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.370972 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:48 crc kubenswrapper[4811]: E0216 20:58:48.371434 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:48.871413427 +0000 UTC m=+146.800709355 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.374681 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.374768 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.475799 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:48 crc kubenswrapper[4811]: E0216 20:58:48.476373 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:48.976357514 +0000 UTC m=+146.905653452 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.584182 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.586617 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:48 crc kubenswrapper[4811]: E0216 20:58:48.587088 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 20:58:49.087071668 +0000 UTC m=+147.016367606 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.607649 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9ckm2" event={"ID":"2b8eb903-1c76-4c64-b9ec-f33f22e756cf","Type":"ContainerStarted","Data":"bae987d61be0fbce51964674d71c08be265e2d9a31a058ff6248d0fe06ec0ff5"} Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.625601 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-n8rd6" event={"ID":"45722898-287e-4a8e-8816-5928e178d2d7","Type":"ContainerStarted","Data":"cb076ec88ec0da6641a861c9eed260017597619ea452ed5975ab0a12643cf3f0"} Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.626971 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-n8rd6" Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.636881 4811 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-n8rd6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.636945 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-n8rd6" 
podUID="45722898-287e-4a8e-8816-5928e178d2d7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.638493 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbbzr" event={"ID":"12b6e0a6-4e13-4393-a8c3-6820aeda2913","Type":"ContainerStarted","Data":"119fc11bfeddc652078dfa5d31a8eb632f26cb136991149d1bf282db88c35646"} Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.652167 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wljzx" event={"ID":"27b16b4d-6b71-4eba-955c-2f33c6c73a9d","Type":"ContainerStarted","Data":"169c7a03ed2daf24d98d4c2c570e0cb3ec8e38afc79827ed39a2c321dd3b95e8"} Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.676651 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-z67dz" event={"ID":"49885968-d644-4257-9e7c-7ed6bc875f9e","Type":"ContainerStarted","Data":"bc2fd261c4866fa8f848bbdc28d3e264f804916f07116e680203e11b97e49b17"} Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.676717 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-z67dz" event={"ID":"49885968-d644-4257-9e7c-7ed6bc875f9e","Type":"ContainerStarted","Data":"c0583bb9cac46989f7c4436060b25bafbd6457bff6eff09d03ee9423b01e3705"} Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.688230 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:48 crc 
kubenswrapper[4811]: E0216 20:58:48.690754 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:49.190733803 +0000 UTC m=+147.120029741 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.695675 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7wh82" event={"ID":"9e30dd8d-c885-4715-916c-2f87ff167589","Type":"ContainerStarted","Data":"86f8be2c21f8c5b2629b809b506e37d6e2a6adaca9c7c723c3c6f1038604f40c"} Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.737153 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-n8rd6" podStartSLOduration=125.737131954 podStartE2EDuration="2m5.737131954s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:48.698729055 +0000 UTC m=+146.628024993" watchObservedRunningTime="2026-02-16 20:58:48.737131954 +0000 UTC m=+146.666427892" Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.738804 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-z67dz" podStartSLOduration=124.738800126 
podStartE2EDuration="2m4.738800126s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:48.738129279 +0000 UTC m=+146.667425217" watchObservedRunningTime="2026-02-16 20:58:48.738800126 +0000 UTC m=+146.668096064" Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.764277 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gdcrk" Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.764320 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffr95" event={"ID":"451b0c97-6ae1-4cb7-ac95-e4ecf08b0587","Type":"ContainerStarted","Data":"c730ece0dbfb31578e7463c208075409e038b62ab4cd0e8de68cd2f96bd2976d"} Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.764342 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gdcrk" event={"ID":"eddedb9f-4d8f-467e-94a0-3e2b45746f42","Type":"ContainerStarted","Data":"e70d5e11a757fc8aa6f107f66df0f2ed7c72b6d9a3da6c270d4446b5ba9707b6"} Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.770092 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-njf2g" event={"ID":"81a41d1f-0c1d-41cf-991b-f521c34bde80","Type":"ContainerStarted","Data":"af544659c66d99de79a3936c5848fb5768dcf252b52f8fa58cbd44c1a7ead1cd"} Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.793235 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6q4n5" event={"ID":"86a38724-0aff-4a27-bebf-7eab7ffa24bc","Type":"ContainerStarted","Data":"224275b20ab5ecdf92c812cfb80cc2623d17c316f5680b22650ecfb774902af7"} Feb 16 20:58:48 crc 
kubenswrapper[4811]: I0216 20:58:48.793293 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6q4n5" event={"ID":"86a38724-0aff-4a27-bebf-7eab7ffa24bc","Type":"ContainerStarted","Data":"b6417ed94d5d3c1953cb877347df2889d2ab55e3717893cd3e03e1b7e0d663da"} Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.793569 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:48 crc kubenswrapper[4811]: E0216 20:58:48.795187 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:49.295172568 +0000 UTC m=+147.224468506 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.810303 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffr95" podStartSLOduration=125.810275039 podStartE2EDuration="2m5.810275039s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:48.778282542 +0000 UTC m=+146.707578480" watchObservedRunningTime="2026-02-16 20:58:48.810275039 +0000 UTC m=+146.739570977" Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.821762 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vx5rb" event={"ID":"8ae95b88-69b4-470b-9551-6f6412d991ac","Type":"ContainerStarted","Data":"fa06466e5166f875f5a794faafb31e1fc7baec6d4909d424f7484a101fc36eb3"} Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.831639 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qnxsg" event={"ID":"b848efbd-79a2-4b6b-a42f-36f109a33e01","Type":"ContainerStarted","Data":"2a6826b7e4b6470189cc676d564181d959bd388a557dc807cfd661fedc3ed846"} Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.852412 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gdcrk" 
podStartSLOduration=125.852392672 podStartE2EDuration="2m5.852392672s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:48.810799223 +0000 UTC m=+146.740095161" watchObservedRunningTime="2026-02-16 20:58:48.852392672 +0000 UTC m=+146.781688620" Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.852775 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-njf2g" podStartSLOduration=125.852766651 podStartE2EDuration="2m5.852766651s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:48.850499304 +0000 UTC m=+146.779795272" watchObservedRunningTime="2026-02-16 20:58:48.852766651 +0000 UTC m=+146.782062589" Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.872530 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8hwk8" event={"ID":"d76477a2-14d0-4d86-b850-a980bf3ca21a","Type":"ContainerStarted","Data":"4f072e946b572d47d3b989a3ddbe42823cf3e5ee206aebd4c2cd03a9c20e5e46"} Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.896239 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:48 crc kubenswrapper[4811]: E0216 20:58:48.898306 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-16 20:58:49.39828185 +0000 UTC m=+147.327577788 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.915838 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6q4n5" podStartSLOduration=125.915814302 podStartE2EDuration="2m5.915814302s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:48.892824522 +0000 UTC m=+146.822120460" watchObservedRunningTime="2026-02-16 20:58:48.915814302 +0000 UTC m=+146.845110240" Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.917223 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qnxsg" podStartSLOduration=125.917218497 podStartE2EDuration="2m5.917218497s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:48.916416487 +0000 UTC m=+146.845712425" watchObservedRunningTime="2026-02-16 20:58:48.917218497 +0000 UTC m=+146.846514435" Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.931387 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nlp5w" 
event={"ID":"d5952073-af7a-4268-b4e2-ad8e98b0e02a","Type":"ContainerStarted","Data":"79db30fc744ed8e92dd742af9955c6830b288e6efd3f26cee6316b0fca579eba"} Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.956095 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-9hxzk" event={"ID":"f2292d96-838d-4d2c-a325-bb2d7f2d2eda","Type":"ContainerStarted","Data":"c17cfd36e308ad4d265fd28178c023e54c40a0cfcadfb8f701a1b7935f445b67"} Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.974046 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4km29" event={"ID":"7f1912bb-76f1-493c-b982-2a75e48cb649","Type":"ContainerStarted","Data":"d3630cc486cc97f98b0515f1db4582653943d628d7701d7d860c20ab35921e93"} Feb 16 20:58:48 crc kubenswrapper[4811]: I0216 20:58:48.992168 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ktp4v" event={"ID":"b9fba317-198f-48f6-b678-9a0df33df707","Type":"ContainerStarted","Data":"96b0d6e315b1d563c8975af1e84ad313d6332323eec8198349b6b636794cb059"} Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.006346 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:49 crc kubenswrapper[4811]: E0216 20:58:49.025985 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:49.525957771 +0000 UTC m=+147.455253709 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.043840 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bpcmh" event={"ID":"82635068-e556-4de1-be36-160c60aed1d4","Type":"ContainerStarted","Data":"7adccbfab93e96cf2ad918a67d271d7ca8af31c40f70962208fd403154c5e44b"} Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.054024 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-9xzcs" event={"ID":"9195c217-c5bc-4625-9b9c-2aa209485e3c","Type":"ContainerStarted","Data":"32273f40296cb069a1d2afbb7235ee062b126f01eea8680d182bc4166a644b08"} Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.055921 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nl2ks" event={"ID":"c699beb7-358c-424b-ab7e-cd1396bd8803","Type":"ContainerStarted","Data":"c3870ec58fa614914ed83ca7f0480ab7125bb422cf01e20f8bfc7c14bf2fa8d8"} Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.089990 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-9hxzk" podStartSLOduration=126.089965706 podStartE2EDuration="2m6.089965706s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:49.005258659 +0000 UTC m=+146.934554597" 
watchObservedRunningTime="2026-02-16 20:58:49.089965706 +0000 UTC m=+147.019261644" Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.096423 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mvkhm" event={"ID":"eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1","Type":"ContainerStarted","Data":"03d2226481de05106bada5269e3884f54526f27cbe941461b36abb9931fc2740"} Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.107786 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:49 crc kubenswrapper[4811]: E0216 20:58:49.109676 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:49.609656683 +0000 UTC m=+147.538952621 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.141437 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fh4pc" event={"ID":"7f8d67f2-74fc-4244-a62e-97fed3b28c79","Type":"ContainerStarted","Data":"d5e1e27b970d6bad514181b9f1477093e09916a60d3718af7c3f5c7331cea85d"} Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.175760 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-9xzcs" podStartSLOduration=126.17574395 podStartE2EDuration="2m6.17574395s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:49.144744978 +0000 UTC m=+147.074040906" watchObservedRunningTime="2026-02-16 20:58:49.17574395 +0000 UTC m=+147.105039888" Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.176121 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nl2ks" podStartSLOduration=126.17611559 podStartE2EDuration="2m6.17611559s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:49.174575871 +0000 UTC m=+147.103871819" watchObservedRunningTime="2026-02-16 20:58:49.17611559 +0000 UTC 
m=+147.105411538" Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.201114 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tm698" event={"ID":"9f81a0a6-152c-48ca-8eec-eb9e330d3902","Type":"ContainerStarted","Data":"718e79a16969e5008ea0ebe5e274c3d8d9d4bd6d73f0366d6cab38ae36c0441b"} Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.230484 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:49 crc kubenswrapper[4811]: E0216 20:58:49.231860 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:49.731841846 +0000 UTC m=+147.661137784 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.238733 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fv7gf" Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.248002 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.259614 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fh4pc" podStartSLOduration=126.259588856 podStartE2EDuration="2m6.259588856s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:49.237361065 +0000 UTC m=+147.166657003" watchObservedRunningTime="2026-02-16 20:58:49.259588856 +0000 UTC m=+147.188884794" Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.317093 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tm698" podStartSLOduration=125.317067176 podStartE2EDuration="2m5.317067176s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:49.313008613 +0000 UTC m=+147.242304551" 
watchObservedRunningTime="2026-02-16 20:58:49.317067176 +0000 UTC m=+147.246363114" Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.334565 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:49 crc kubenswrapper[4811]: E0216 20:58:49.334854 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:49.834833994 +0000 UTC m=+147.764129932 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.335219 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:49 crc kubenswrapper[4811]: E0216 20:58:49.337688 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:49.837678276 +0000 UTC m=+147.766974204 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.342446 4811 patch_prober.go:28] interesting pod/router-default-5444994796-lbxk8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 20:58:49 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 16 20:58:49 crc kubenswrapper[4811]: [+]process-running ok Feb 16 20:58:49 crc kubenswrapper[4811]: healthz check failed Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.342510 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lbxk8" podUID="2d817b52-21fc-40d9-a36f-487e6719ebfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.439960 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:49 crc kubenswrapper[4811]: E0216 20:58:49.440379 4811 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:49.940342036 +0000 UTC m=+147.869637974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.440999 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:49 crc kubenswrapper[4811]: E0216 20:58:49.441445 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:49.941428404 +0000 UTC m=+147.870724332 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.548911 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:49 crc kubenswrapper[4811]: E0216 20:58:49.549284 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:50.049267835 +0000 UTC m=+147.978563773 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.651181 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.651667 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.651694 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.651736 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:49 crc kubenswrapper[4811]: E0216 20:58:49.652633 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:50.152620603 +0000 UTC m=+148.081916541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.654675 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.664335 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.670082 4811 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.754239 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:49 crc kubenswrapper[4811]: E0216 20:58:49.754361 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:50.254322359 +0000 UTC m=+148.183618297 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.754560 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.754650 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:49 crc kubenswrapper[4811]: E0216 20:58:49.755171 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:50.25514983 +0000 UTC m=+148.184445768 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.762993 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.830861 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.842780 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.855754 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:49 crc kubenswrapper[4811]: E0216 20:58:49.855999 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 20:58:50.355957043 +0000 UTC m=+148.285252981 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.856186 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:49 crc kubenswrapper[4811]: E0216 20:58:49.856519 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:50.356504367 +0000 UTC m=+148.285800305 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.941155 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.961821 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:49 crc kubenswrapper[4811]: E0216 20:58:49.961966 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:50.461948197 +0000 UTC m=+148.391244135 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:49 crc kubenswrapper[4811]: I0216 20:58:49.962179 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:49 crc kubenswrapper[4811]: E0216 20:58:49.962479 4811 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:50.462471371 +0000 UTC m=+148.391767309 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.063242 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:50 crc kubenswrapper[4811]: E0216 20:58:50.063699 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:50.563675344 +0000 UTC m=+148.492971282 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.165346 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:50 crc kubenswrapper[4811]: E0216 20:58:50.165788 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:50.66577204 +0000 UTC m=+148.595067978 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.269367 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:50 crc kubenswrapper[4811]: E0216 20:58:50.270322 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:50.770302758 +0000 UTC m=+148.699598696 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.332737 4811 patch_prober.go:28] interesting pod/router-default-5444994796-lbxk8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 20:58:50 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 16 20:58:50 crc kubenswrapper[4811]: [+]process-running ok Feb 16 20:58:50 crc kubenswrapper[4811]: healthz check failed Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.332815 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lbxk8" podUID="2d817b52-21fc-40d9-a36f-487e6719ebfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.335748 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2cqpn" event={"ID":"d8d5d20c-9da0-4bf1-9f57-d3b96c8736e6","Type":"ContainerStarted","Data":"e1cd39633261482ec8f62e50a513561f6e3e66a28809c36d9bec4b4fcb0d0d52"} Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.335793 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2cqpn" event={"ID":"d8d5d20c-9da0-4bf1-9f57-d3b96c8736e6","Type":"ContainerStarted","Data":"9e42dbfc92dae0d16fbd53b5e67c119cb030a6736b5e389c56bba3121bb0bcbd"} Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.362278 
4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbbzr" event={"ID":"12b6e0a6-4e13-4393-a8c3-6820aeda2913","Type":"ContainerStarted","Data":"dee638b4f378cfed3928c67efda17a835db2c3d00dfc683338ad2683bf435458"} Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.374783 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:50 crc kubenswrapper[4811]: E0216 20:58:50.376294 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:50.876281392 +0000 UTC m=+148.805577320 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.388592 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mvkhm" event={"ID":"eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1","Type":"ContainerStarted","Data":"07ff8893c1b8c380158b2c5803e9fb22a9478e46cd42d858c2e69bfddafb2346"} Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.405023 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-2cqpn" podStartSLOduration=127.404999406 podStartE2EDuration="2m7.404999406s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:50.373600164 +0000 UTC m=+148.302896112" watchObservedRunningTime="2026-02-16 20:58:50.404999406 +0000 UTC m=+148.334295354" Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.412808 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kk27l" event={"ID":"fae08180-6d56-48f5-99c6-d98b52eb0ccf","Type":"ContainerStarted","Data":"52ec349a470e409423fef9ace4d5b9af97e05a4bad757ce5b80152e503d2beff"} Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.412875 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kk27l" 
event={"ID":"fae08180-6d56-48f5-99c6-d98b52eb0ccf","Type":"ContainerStarted","Data":"bf9dc9bbc5e41d024dddbfe69be45a5e33cefa9346b8ddba19c37ad3aa23fa23"} Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.449497 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7wh82" event={"ID":"9e30dd8d-c885-4715-916c-2f87ff167589","Type":"ContainerStarted","Data":"837964f91b4e5b3ca6d393c6439fffa3c6ed0576d855e747f02b79cdc14459c9"} Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.456899 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbbzr" podStartSLOduration=127.456877785 podStartE2EDuration="2m7.456877785s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:50.406451492 +0000 UTC m=+148.335747430" watchObservedRunningTime="2026-02-16 20:58:50.456877785 +0000 UTC m=+148.386173713" Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.475822 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8hwk8" event={"ID":"d76477a2-14d0-4d86-b850-a980bf3ca21a","Type":"ContainerStarted","Data":"e95397ca5fdb3ac963d65cdc7003f0cd4ac7c47aa4ff22a4aaf7a7af6e357f6b"} Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.477165 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:50 crc kubenswrapper[4811]: E0216 20:58:50.478489 4811 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:50.97846952 +0000 UTC m=+148.907765458 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.489654 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7wh82" podStartSLOduration=127.489635871 podStartE2EDuration="2m7.489635871s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:50.488549644 +0000 UTC m=+148.417845602" watchObservedRunningTime="2026-02-16 20:58:50.489635871 +0000 UTC m=+148.418931809" Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.490926 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-kk27l" podStartSLOduration=127.490920594 podStartE2EDuration="2m7.490920594s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:50.459131162 +0000 UTC m=+148.388427100" watchObservedRunningTime="2026-02-16 20:58:50.490920594 +0000 UTC m=+148.420216532" Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.519861 
4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nlp5w" event={"ID":"d5952073-af7a-4268-b4e2-ad8e98b0e02a","Type":"ContainerStarted","Data":"ce45af2e1980b8f97cb55acbdea4579774b4e3ccb8a3b7c5fcaafb7ce50311d9"} Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.519930 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nlp5w" event={"ID":"d5952073-af7a-4268-b4e2-ad8e98b0e02a","Type":"ContainerStarted","Data":"ac164f896e9166c08e5fbaa30118a1c948d23c637de3b3bbb667c40ad873bca4"} Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.520805 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-nlp5w" Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.534706 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8hwk8" podStartSLOduration=127.534689198 podStartE2EDuration="2m7.534689198s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:50.532892393 +0000 UTC m=+148.462188331" watchObservedRunningTime="2026-02-16 20:58:50.534689198 +0000 UTC m=+148.463985146" Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.542408 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4km29" event={"ID":"7f1912bb-76f1-493c-b982-2a75e48cb649","Type":"ContainerStarted","Data":"8c98beb367cdc37604b5840391ae189203b390dfd8e12274e9049a793c921d8d"} Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.542459 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4km29" 
event={"ID":"7f1912bb-76f1-493c-b982-2a75e48cb649","Type":"ContainerStarted","Data":"42e37c8c4a724828f5082ce360a7da17ca0f50656208ed7e7cf1158d0b555568"} Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.558692 4811 generic.go:334] "Generic (PLEG): container finished" podID="b9fba317-198f-48f6-b678-9a0df33df707" containerID="e7209c6701430fb1944c9f454254fd96e41e2ac36b60c1450eb024878c6def0b" exitCode=0 Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.558794 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ktp4v" event={"ID":"b9fba317-198f-48f6-b678-9a0df33df707","Type":"ContainerDied","Data":"e7209c6701430fb1944c9f454254fd96e41e2ac36b60c1450eb024878c6def0b"} Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.571567 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bpcmh" event={"ID":"82635068-e556-4de1-be36-160c60aed1d4","Type":"ContainerStarted","Data":"15a724292f502f9bd5e777726e11fadc1d104e78996ad8c8d67dd29d280cb425"} Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.572681 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bpcmh" Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.584339 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:50 crc kubenswrapper[4811]: E0216 20:58:50.588785 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 20:58:51.088765043 +0000 UTC m=+149.018060981 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.610399 4811 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-bpcmh container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" start-of-body= Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.610475 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bpcmh" podUID="82635068-e556-4de1-be36-160c60aed1d4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.614978 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-nlp5w" podStartSLOduration=9.614966084 podStartE2EDuration="9.614966084s" podCreationTimestamp="2026-02-16 20:58:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:50.566900181 +0000 UTC m=+148.496196119" watchObservedRunningTime="2026-02-16 20:58:50.614966084 +0000 UTC m=+148.544262022" Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.617516 4811 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vx5rb" event={"ID":"8ae95b88-69b4-470b-9551-6f6412d991ac","Type":"ContainerStarted","Data":"9552f92386f3d4e519676dc27295e5a52fb3342ebe5f939769cf5c46269bcb23"} Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.618786 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vx5rb" Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.637797 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gdcrk" event={"ID":"eddedb9f-4d8f-467e-94a0-3e2b45746f42","Type":"ContainerStarted","Data":"564b621d4c1c93455f9dcd3cd7b144bb6315725963bd1d79a2e13397f626a08e"} Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.640052 4811 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vx5rb container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.640093 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vx5rb" podUID="8ae95b88-69b4-470b-9551-6f6412d991ac" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.650504 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9ckm2" event={"ID":"2b8eb903-1c76-4c64-b9ec-f33f22e756cf","Type":"ContainerStarted","Data":"e8d39ea8c4b8755f8b1e009abe21fcbdf87aafd60e63f5d4d45e6e9250a1ac18"} Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.650550 4811 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9ckm2" Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.655176 4811 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-9ckm2 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.655260 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9ckm2" podUID="2b8eb903-1c76-4c64-b9ec-f33f22e756cf" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.667403 4811 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-n8rd6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.667535 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-n8rd6" podUID="45722898-287e-4a8e-8816-5928e178d2d7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.689623 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4km29" podStartSLOduration=127.689601677 podStartE2EDuration="2m7.689601677s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:50.628352962 +0000 UTC m=+148.557648910" watchObservedRunningTime="2026-02-16 20:58:50.689601677 +0000 UTC m=+148.618897635" Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.691002 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:50 crc kubenswrapper[4811]: E0216 20:58:50.692040 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.192026368 +0000 UTC m=+149.121322306 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.870220 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bpcmh" podStartSLOduration=127.870180203 podStartE2EDuration="2m7.870180203s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:50.808126098 +0000 UTC m=+148.737422036" watchObservedRunningTime="2026-02-16 20:58:50.870180203 +0000 UTC m=+148.799476141" Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.900647 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vx5rb" podStartSLOduration=127.900616541 podStartE2EDuration="2m7.900616541s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:50.882548955 +0000 UTC m=+148.811844913" watchObservedRunningTime="2026-02-16 20:58:50.900616541 +0000 UTC m=+148.829912479" Feb 16 20:58:50 crc kubenswrapper[4811]: I0216 20:58:50.911248 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: 
\"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:51 crc kubenswrapper[4811]: E0216 20:58:51.014838 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.514816352 +0000 UTC m=+149.444112290 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.016775 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:51 crc kubenswrapper[4811]: E0216 20:58:51.023857 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.5238439 +0000 UTC m=+149.453139838 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.023943 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:51 crc kubenswrapper[4811]: E0216 20:58:51.024313 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.524306142 +0000 UTC m=+149.453602080 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.083079 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9ckm2" podStartSLOduration=128.083059964 podStartE2EDuration="2m8.083059964s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:50.963016365 +0000 UTC m=+148.892312303" watchObservedRunningTime="2026-02-16 20:58:51.083059964 +0000 UTC m=+149.012355902" Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.131050 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:51 crc kubenswrapper[4811]: E0216 20:58:51.135432 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.635407134 +0000 UTC m=+149.564703072 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.162885 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:51 crc kubenswrapper[4811]: E0216 20:58:51.163701 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.663684017 +0000 UTC m=+149.592979955 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.271256 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:51 crc kubenswrapper[4811]: E0216 20:58:51.271705 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.771684842 +0000 UTC m=+149.700980780 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.271785 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:51 crc kubenswrapper[4811]: E0216 20:58:51.272134 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.772117963 +0000 UTC m=+149.701413901 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.336492 4811 patch_prober.go:28] interesting pod/router-default-5444994796-lbxk8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 20:58:51 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 16 20:58:51 crc kubenswrapper[4811]: [+]process-running ok Feb 16 20:58:51 crc kubenswrapper[4811]: healthz check failed Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.336947 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lbxk8" podUID="2d817b52-21fc-40d9-a36f-487e6719ebfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.373625 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:51 crc kubenswrapper[4811]: E0216 20:58:51.374053 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 20:58:51.874030475 +0000 UTC m=+149.803326413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.475149 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:51 crc kubenswrapper[4811]: E0216 20:58:51.475556 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.975537016 +0000 UTC m=+149.904832954 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.576079 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:51 crc kubenswrapper[4811]: E0216 20:58:51.576375 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:52.076333769 +0000 UTC m=+150.005629707 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.576450 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:51 crc kubenswrapper[4811]: E0216 20:58:51.576773 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:52.07676007 +0000 UTC m=+150.006056008 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.654211 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b5de7ba998e9a22f9029b92267d4570176342d66daaa72ed14335e457adcd300"} Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.654281 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"26e01b3d52b63319daac56c276973cfb088e13d6a83982978b223597ec9de555"} Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.656951 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"bba40bab28da1cd4bdeef92affbfafbc5b81cecd3be81b32872a46e68727eab7"} Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.657009 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"4140e842b88ad2a083f3a3873d09a7f4e001220cc5a9fc42dc747546fe93fb61"} Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.666834 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-ktp4v" event={"ID":"b9fba317-198f-48f6-b678-9a0df33df707","Type":"ContainerStarted","Data":"16ee4a8b60a848c182252f6fe0c9e8dfe045fc82770b2a36dd0f0add0b12fbb5"} Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.667246 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ktp4v" Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.675373 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"bc201653b924cd1e75b5c0d90307cb38029b9de9a1cc6799952622d49be294b6"} Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.675436 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1b9997ca82b109a4420a44fe1bb62f2c6b7b55f35e0831ff16f3576f0c1f39a1"} Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.676626 4811 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-n8rd6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.676679 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-n8rd6" podUID="45722898-287e-4a8e-8816-5928e178d2d7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.678015 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:51 crc kubenswrapper[4811]: E0216 20:58:51.678242 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:52.178220839 +0000 UTC m=+150.107516777 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.678688 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:51 crc kubenswrapper[4811]: E0216 20:58:51.679013 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:52.179001149 +0000 UTC m=+150.108297087 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.684459 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9ckm2" Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.756873 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vx5rb" Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.781808 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:51 crc kubenswrapper[4811]: E0216 20:58:51.783231 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:52.283208958 +0000 UTC m=+150.212504896 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.783985 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:51 crc kubenswrapper[4811]: E0216 20:58:51.784424 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:52.284403379 +0000 UTC m=+150.213699317 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.885732 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:51 crc kubenswrapper[4811]: E0216 20:58:51.886148 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:52.386131445 +0000 UTC m=+150.315427383 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4811]: I0216 20:58:51.987073 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:51 crc kubenswrapper[4811]: E0216 20:58:51.987608 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:52.487585765 +0000 UTC m=+150.416881703 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:52 crc kubenswrapper[4811]: I0216 20:58:52.088758 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:52 crc kubenswrapper[4811]: E0216 20:58:52.089186 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:52.589171358 +0000 UTC m=+150.518467296 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:52 crc kubenswrapper[4811]: I0216 20:58:52.168670 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ktp4v" podStartSLOduration=129.168648913 podStartE2EDuration="2m9.168648913s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:52.097308373 +0000 UTC m=+150.026604331" watchObservedRunningTime="2026-02-16 20:58:52.168648913 +0000 UTC m=+150.097944851" Feb 16 20:58:52 crc kubenswrapper[4811]: I0216 20:58:52.190142 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:52 crc kubenswrapper[4811]: E0216 20:58:52.190699 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:52.690677129 +0000 UTC m=+150.619973067 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:52 crc kubenswrapper[4811]: I0216 20:58:52.294039 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:52 crc kubenswrapper[4811]: E0216 20:58:52.294228 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:52.794189131 +0000 UTC m=+150.723485069 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:52 crc kubenswrapper[4811]: I0216 20:58:52.294436 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:52 crc kubenswrapper[4811]: E0216 20:58:52.294929 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:52.794913469 +0000 UTC m=+150.724209407 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:52 crc kubenswrapper[4811]: I0216 20:58:52.330063 4811 patch_prober.go:28] interesting pod/router-default-5444994796-lbxk8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 20:58:52 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 16 20:58:52 crc kubenswrapper[4811]: [+]process-running ok Feb 16 20:58:52 crc kubenswrapper[4811]: healthz check failed Feb 16 20:58:52 crc kubenswrapper[4811]: I0216 20:58:52.330599 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lbxk8" podUID="2d817b52-21fc-40d9-a36f-487e6719ebfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:58:52 crc kubenswrapper[4811]: I0216 20:58:52.390866 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bpcmh" Feb 16 20:58:52 crc kubenswrapper[4811]: I0216 20:58:52.395406 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:52 crc kubenswrapper[4811]: E0216 20:58:52.395643 4811 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:52.89560262 +0000 UTC m=+150.824898558 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:52 crc kubenswrapper[4811]: I0216 20:58:52.395724 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:52 crc kubenswrapper[4811]: E0216 20:58:52.396171 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:52.896163094 +0000 UTC m=+150.825459032 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:52 crc kubenswrapper[4811]: I0216 20:58:52.497189 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:52 crc kubenswrapper[4811]: E0216 20:58:52.497645 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:52.997628564 +0000 UTC m=+150.926924502 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:52 crc kubenswrapper[4811]: I0216 20:58:52.565032 4811 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 16 20:58:52 crc kubenswrapper[4811]: I0216 20:58:52.598775 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:52 crc kubenswrapper[4811]: E0216 20:58:52.599233 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:53.099211667 +0000 UTC m=+151.028507605 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:52 crc kubenswrapper[4811]: I0216 20:58:52.684133 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mvkhm" event={"ID":"eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1","Type":"ContainerStarted","Data":"641712c45a4f8d185b2e9ef8d120417d48b372074303913567b4018812305f48"} Feb 16 20:58:52 crc kubenswrapper[4811]: I0216 20:58:52.684181 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mvkhm" event={"ID":"eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1","Type":"ContainerStarted","Data":"380f47620c6cc3efe1a42d091987797d9e07ae8dc25684f2d094a5b1c615737c"} Feb 16 20:58:52 crc kubenswrapper[4811]: I0216 20:58:52.684281 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mvkhm" event={"ID":"eef7166a-3f42-481e-9ecd-b8a6a8bc4cc1","Type":"ContainerStarted","Data":"09189d1669cace3672e5157c0adef9c841d13fc043b5c923e87fa932f12250a6"} Feb 16 20:58:52 crc kubenswrapper[4811]: I0216 20:58:52.702294 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:52 crc kubenswrapper[4811]: E0216 20:58:52.702493 4811 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:53.202464502 +0000 UTC m=+151.131760440 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:52 crc kubenswrapper[4811]: I0216 20:58:52.702652 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:52 crc kubenswrapper[4811]: E0216 20:58:52.702986 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:53.202974105 +0000 UTC m=+151.132270043 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:52 crc kubenswrapper[4811]: I0216 20:58:52.723847 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-mvkhm" podStartSLOduration=11.723819101 podStartE2EDuration="11.723819101s" podCreationTimestamp="2026-02-16 20:58:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:52.715894591 +0000 UTC m=+150.645190529" watchObservedRunningTime="2026-02-16 20:58:52.723819101 +0000 UTC m=+150.653115039" Feb 16 20:58:52 crc kubenswrapper[4811]: I0216 20:58:52.803718 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:52 crc kubenswrapper[4811]: E0216 20:58:52.803971 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:53.303925222 +0000 UTC m=+151.233221170 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:52 crc kubenswrapper[4811]: I0216 20:58:52.804289 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:52 crc kubenswrapper[4811]: E0216 20:58:52.804966 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:53.304949708 +0000 UTC m=+151.234245646 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z425" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:52 crc kubenswrapper[4811]: I0216 20:58:52.887247 4811 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-16T20:58:52.565064195Z","Handler":null,"Name":""} Feb 16 20:58:52 crc kubenswrapper[4811]: I0216 20:58:52.890009 4811 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 16 20:58:52 crc kubenswrapper[4811]: I0216 20:58:52.890057 4811 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 16 20:58:52 crc kubenswrapper[4811]: I0216 20:58:52.905543 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:52 crc kubenswrapper[4811]: I0216 20:58:52.918642 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: 
"8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.007372 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.010980 4811 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.011020 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.027566 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.027636 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.036375 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z425\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.043580 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.098295 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.227859 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bz6d7"] Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.229455 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bz6d7" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.232223 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.248661 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bz6d7"] Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.312705 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08f82c33-6a50-480c-b780-e95a09a3e064-utilities\") pod \"community-operators-bz6d7\" (UID: \"08f82c33-6a50-480c-b780-e95a09a3e064\") " pod="openshift-marketplace/community-operators-bz6d7" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.312821 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/08f82c33-6a50-480c-b780-e95a09a3e064-catalog-content\") pod \"community-operators-bz6d7\" (UID: \"08f82c33-6a50-480c-b780-e95a09a3e064\") " pod="openshift-marketplace/community-operators-bz6d7" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.312864 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58xmv\" (UniqueName: \"kubernetes.io/projected/08f82c33-6a50-480c-b780-e95a09a3e064-kube-api-access-58xmv\") pod \"community-operators-bz6d7\" (UID: \"08f82c33-6a50-480c-b780-e95a09a3e064\") " pod="openshift-marketplace/community-operators-bz6d7" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.351574 4811 patch_prober.go:28] interesting pod/router-default-5444994796-lbxk8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 20:58:53 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 16 20:58:53 crc kubenswrapper[4811]: [+]process-running ok Feb 16 20:58:53 crc kubenswrapper[4811]: healthz check failed Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.351682 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lbxk8" podUID="2d817b52-21fc-40d9-a36f-487e6719ebfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.408025 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.408824 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.410737 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.424524 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.424763 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08f82c33-6a50-480c-b780-e95a09a3e064-utilities\") pod \"community-operators-bz6d7\" (UID: \"08f82c33-6a50-480c-b780-e95a09a3e064\") " pod="openshift-marketplace/community-operators-bz6d7" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.424838 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08f82c33-6a50-480c-b780-e95a09a3e064-catalog-content\") pod \"community-operators-bz6d7\" (UID: \"08f82c33-6a50-480c-b780-e95a09a3e064\") " pod="openshift-marketplace/community-operators-bz6d7" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.424872 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58xmv\" (UniqueName: \"kubernetes.io/projected/08f82c33-6a50-480c-b780-e95a09a3e064-kube-api-access-58xmv\") pod \"community-operators-bz6d7\" (UID: \"08f82c33-6a50-480c-b780-e95a09a3e064\") " pod="openshift-marketplace/community-operators-bz6d7" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.428537 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08f82c33-6a50-480c-b780-e95a09a3e064-catalog-content\") pod \"community-operators-bz6d7\" (UID: \"08f82c33-6a50-480c-b780-e95a09a3e064\") " 
pod="openshift-marketplace/community-operators-bz6d7" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.428552 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08f82c33-6a50-480c-b780-e95a09a3e064-utilities\") pod \"community-operators-bz6d7\" (UID: \"08f82c33-6a50-480c-b780-e95a09a3e064\") " pod="openshift-marketplace/community-operators-bz6d7" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.430164 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.432674 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jtdt8"] Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.433986 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jtdt8" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.441999 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jtdt8"] Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.442338 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.488292 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58xmv\" (UniqueName: \"kubernetes.io/projected/08f82c33-6a50-480c-b780-e95a09a3e064-kube-api-access-58xmv\") pod \"community-operators-bz6d7\" (UID: \"08f82c33-6a50-480c-b780-e95a09a3e064\") " pod="openshift-marketplace/community-operators-bz6d7" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.525983 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.526045 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-278zz\" (UniqueName: \"kubernetes.io/projected/aabb6f4a-05fd-4f4f-9211-81884fdd4bb1-kube-api-access-278zz\") pod \"certified-operators-jtdt8\" (UID: \"aabb6f4a-05fd-4f4f-9211-81884fdd4bb1\") " pod="openshift-marketplace/certified-operators-jtdt8" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.526086 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.526111 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aabb6f4a-05fd-4f4f-9211-81884fdd4bb1-catalog-content\") pod \"certified-operators-jtdt8\" (UID: \"aabb6f4a-05fd-4f4f-9211-81884fdd4bb1\") " pod="openshift-marketplace/certified-operators-jtdt8" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.526301 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aabb6f4a-05fd-4f4f-9211-81884fdd4bb1-utilities\") pod \"certified-operators-jtdt8\" (UID: \"aabb6f4a-05fd-4f4f-9211-81884fdd4bb1\") " pod="openshift-marketplace/certified-operators-jtdt8" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.547188 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bz6d7" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.617678 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cspmf"] Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.619078 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cspmf" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.628753 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.628823 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aabb6f4a-05fd-4f4f-9211-81884fdd4bb1-catalog-content\") pod \"certified-operators-jtdt8\" (UID: \"aabb6f4a-05fd-4f4f-9211-81884fdd4bb1\") " pod="openshift-marketplace/certified-operators-jtdt8" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.628888 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aabb6f4a-05fd-4f4f-9211-81884fdd4bb1-utilities\") pod \"certified-operators-jtdt8\" (UID: \"aabb6f4a-05fd-4f4f-9211-81884fdd4bb1\") " pod="openshift-marketplace/certified-operators-jtdt8" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.628929 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.628963 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-278zz\" (UniqueName: \"kubernetes.io/projected/aabb6f4a-05fd-4f4f-9211-81884fdd4bb1-kube-api-access-278zz\") pod \"certified-operators-jtdt8\" (UID: \"aabb6f4a-05fd-4f4f-9211-81884fdd4bb1\") " pod="openshift-marketplace/certified-operators-jtdt8" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.629432 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.629924 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aabb6f4a-05fd-4f4f-9211-81884fdd4bb1-catalog-content\") pod \"certified-operators-jtdt8\" (UID: \"aabb6f4a-05fd-4f4f-9211-81884fdd4bb1\") " pod="openshift-marketplace/certified-operators-jtdt8" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.630188 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aabb6f4a-05fd-4f4f-9211-81884fdd4bb1-utilities\") pod \"certified-operators-jtdt8\" (UID: \"aabb6f4a-05fd-4f4f-9211-81884fdd4bb1\") " pod="openshift-marketplace/certified-operators-jtdt8" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.648808 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.653906 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cspmf"] Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.666987 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-278zz\" (UniqueName: \"kubernetes.io/projected/aabb6f4a-05fd-4f4f-9211-81884fdd4bb1-kube-api-access-278zz\") pod \"certified-operators-jtdt8\" (UID: \"aabb6f4a-05fd-4f4f-9211-81884fdd4bb1\") " pod="openshift-marketplace/certified-operators-jtdt8" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.722226 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-njf2g" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.732270 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5lzl\" (UniqueName: \"kubernetes.io/projected/14b78b5a-3cbf-4b80-8831-8f522bf2a2e5-kube-api-access-p5lzl\") pod \"community-operators-cspmf\" (UID: \"14b78b5a-3cbf-4b80-8831-8f522bf2a2e5\") " pod="openshift-marketplace/community-operators-cspmf" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.732363 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14b78b5a-3cbf-4b80-8831-8f522bf2a2e5-utilities\") pod \"community-operators-cspmf\" (UID: \"14b78b5a-3cbf-4b80-8831-8f522bf2a2e5\") " pod="openshift-marketplace/community-operators-cspmf" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.732401 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14b78b5a-3cbf-4b80-8831-8f522bf2a2e5-catalog-content\") pod \"community-operators-cspmf\" (UID: 
\"14b78b5a-3cbf-4b80-8831-8f522bf2a2e5\") " pod="openshift-marketplace/community-operators-cspmf" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.737212 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.761151 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jtdt8" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.790287 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4z425"] Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.839463 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5lzl\" (UniqueName: \"kubernetes.io/projected/14b78b5a-3cbf-4b80-8831-8f522bf2a2e5-kube-api-access-p5lzl\") pod \"community-operators-cspmf\" (UID: \"14b78b5a-3cbf-4b80-8831-8f522bf2a2e5\") " pod="openshift-marketplace/community-operators-cspmf" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.839783 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14b78b5a-3cbf-4b80-8831-8f522bf2a2e5-utilities\") pod \"community-operators-cspmf\" (UID: \"14b78b5a-3cbf-4b80-8831-8f522bf2a2e5\") " pod="openshift-marketplace/community-operators-cspmf" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.839843 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14b78b5a-3cbf-4b80-8831-8f522bf2a2e5-catalog-content\") pod \"community-operators-cspmf\" (UID: \"14b78b5a-3cbf-4b80-8831-8f522bf2a2e5\") " pod="openshift-marketplace/community-operators-cspmf" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.842690 4811 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-mm9g2"] Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.842734 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14b78b5a-3cbf-4b80-8831-8f522bf2a2e5-utilities\") pod \"community-operators-cspmf\" (UID: \"14b78b5a-3cbf-4b80-8831-8f522bf2a2e5\") " pod="openshift-marketplace/community-operators-cspmf" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.842944 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14b78b5a-3cbf-4b80-8831-8f522bf2a2e5-catalog-content\") pod \"community-operators-cspmf\" (UID: \"14b78b5a-3cbf-4b80-8831-8f522bf2a2e5\") " pod="openshift-marketplace/community-operators-cspmf" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.844102 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mm9g2" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.861112 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mm9g2"] Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.886047 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-mn795 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.886128 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mn795" podUID="56ef6d7e-b0bf-4bfa-8426-68040e136fe1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.886176 4811 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-mn795 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.886241 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mn795" podUID="56ef6d7e-b0bf-4bfa-8426-68040e136fe1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.890930 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-cch5x" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.913334 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ktp4v" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.921964 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5lzl\" (UniqueName: \"kubernetes.io/projected/14b78b5a-3cbf-4b80-8831-8f522bf2a2e5-kube-api-access-p5lzl\") pod \"community-operators-cspmf\" (UID: \"14b78b5a-3cbf-4b80-8831-8f522bf2a2e5\") " pod="openshift-marketplace/community-operators-cspmf" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.946825 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cspmf" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.948576 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f877237-a18d-42d1-9727-d62eb52ea19c-utilities\") pod \"certified-operators-mm9g2\" (UID: \"4f877237-a18d-42d1-9727-d62eb52ea19c\") " pod="openshift-marketplace/certified-operators-mm9g2" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.948630 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f877237-a18d-42d1-9727-d62eb52ea19c-catalog-content\") pod \"certified-operators-mm9g2\" (UID: \"4f877237-a18d-42d1-9727-d62eb52ea19c\") " pod="openshift-marketplace/certified-operators-mm9g2" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.948729 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2dq6\" (UniqueName: \"kubernetes.io/projected/4f877237-a18d-42d1-9727-d62eb52ea19c-kube-api-access-h2dq6\") pod \"certified-operators-mm9g2\" (UID: \"4f877237-a18d-42d1-9727-d62eb52ea19c\") " pod="openshift-marketplace/certified-operators-mm9g2" Feb 16 20:58:53 crc kubenswrapper[4811]: I0216 20:58:53.971378 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bz6d7"] Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.050060 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2dq6\" (UniqueName: \"kubernetes.io/projected/4f877237-a18d-42d1-9727-d62eb52ea19c-kube-api-access-h2dq6\") pod \"certified-operators-mm9g2\" (UID: \"4f877237-a18d-42d1-9727-d62eb52ea19c\") " pod="openshift-marketplace/certified-operators-mm9g2" Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.050662 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f877237-a18d-42d1-9727-d62eb52ea19c-utilities\") pod \"certified-operators-mm9g2\" (UID: \"4f877237-a18d-42d1-9727-d62eb52ea19c\") " pod="openshift-marketplace/certified-operators-mm9g2" Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.050695 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f877237-a18d-42d1-9727-d62eb52ea19c-catalog-content\") pod \"certified-operators-mm9g2\" (UID: \"4f877237-a18d-42d1-9727-d62eb52ea19c\") " pod="openshift-marketplace/certified-operators-mm9g2" Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.051334 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f877237-a18d-42d1-9727-d62eb52ea19c-catalog-content\") pod \"certified-operators-mm9g2\" (UID: \"4f877237-a18d-42d1-9727-d62eb52ea19c\") " pod="openshift-marketplace/certified-operators-mm9g2" Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.055027 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f877237-a18d-42d1-9727-d62eb52ea19c-utilities\") pod \"certified-operators-mm9g2\" (UID: \"4f877237-a18d-42d1-9727-d62eb52ea19c\") " pod="openshift-marketplace/certified-operators-mm9g2" Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.081435 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2dq6\" (UniqueName: \"kubernetes.io/projected/4f877237-a18d-42d1-9727-d62eb52ea19c-kube-api-access-h2dq6\") pod \"certified-operators-mm9g2\" (UID: \"4f877237-a18d-42d1-9727-d62eb52ea19c\") " pod="openshift-marketplace/certified-operators-mm9g2" Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.185436 4811 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-8vgph" Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.185908 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-8vgph" Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.198373 4811 patch_prober.go:28] interesting pod/console-f9d7485db-8vgph container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.198434 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8vgph" podUID="a00560cb-dc2f-489d-a2b1-aaecee43f0d3" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.222536 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mm9g2" Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.321447 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-lbxk8" Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.344129 4811 patch_prober.go:28] interesting pod/router-default-5444994796-lbxk8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 20:58:54 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 16 20:58:54 crc kubenswrapper[4811]: [+]process-running ok Feb 16 20:58:54 crc kubenswrapper[4811]: healthz check failed Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.344220 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lbxk8" podUID="2d817b52-21fc-40d9-a36f-487e6719ebfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.349006 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jtdt8"] Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.359582 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-n8rd6" Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.377970 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cspmf"] Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.396442 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 20:58:54 crc kubenswrapper[4811]: W0216 20:58:54.398828 4811 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b78b5a_3cbf_4b80_8831_8f522bf2a2e5.slice/crio-fb44bd4f132507c7671248487dacbc2bf59b357b07f10e3c88b15bc0162eb367 WatchSource:0}: Error finding container fb44bd4f132507c7671248487dacbc2bf59b357b07f10e3c88b15bc0162eb367: Status 404 returned error can't find the container with id fb44bd4f132507c7671248487dacbc2bf59b357b07f10e3c88b15bc0162eb367 Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.677043 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mm9g2"] Feb 16 20:58:54 crc kubenswrapper[4811]: W0216 20:58:54.690700 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f877237_a18d_42d1_9727_d62eb52ea19c.slice/crio-9cce0de1aab578aca5c6d2c48f05f4c5d401d911417c6b26aa228814c985b23b WatchSource:0}: Error finding container 9cce0de1aab578aca5c6d2c48f05f4c5d401d911417c6b26aa228814c985b23b: Status 404 returned error can't find the container with id 9cce0de1aab578aca5c6d2c48f05f4c5d401d911417c6b26aa228814c985b23b Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.729240 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.733159 4811 generic.go:334] "Generic (PLEG): container finished" podID="9195c217-c5bc-4625-9b9c-2aa209485e3c" containerID="32273f40296cb069a1d2afbb7235ee062b126f01eea8680d182bc4166a644b08" exitCode=0 Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.733254 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-9xzcs" event={"ID":"9195c217-c5bc-4625-9b9c-2aa209485e3c","Type":"ContainerDied","Data":"32273f40296cb069a1d2afbb7235ee062b126f01eea8680d182bc4166a644b08"} Feb 16 20:58:54 crc 
kubenswrapper[4811]: I0216 20:58:54.740096 4811 generic.go:334] "Generic (PLEG): container finished" podID="08f82c33-6a50-480c-b780-e95a09a3e064" containerID="e2047ca314897ff6dc0045ec7ca070154c61ada89c7ad5c17565a6a4a1bc79f5" exitCode=0 Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.740155 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz6d7" event={"ID":"08f82c33-6a50-480c-b780-e95a09a3e064","Type":"ContainerDied","Data":"e2047ca314897ff6dc0045ec7ca070154c61ada89c7ad5c17565a6a4a1bc79f5"} Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.740184 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz6d7" event={"ID":"08f82c33-6a50-480c-b780-e95a09a3e064","Type":"ContainerStarted","Data":"d23045af8c3156f7285052ed97eec053caa6abfa3669a6037f5637c76e512cb7"} Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.742262 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.748608 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b","Type":"ContainerStarted","Data":"89de4452cd924df129e151926a2ec882ef5319b2dc3969f40cf2ba27bce5e629"} Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.761972 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4z425" event={"ID":"20c76084-401b-41ca-ad08-2752d2d7132b","Type":"ContainerStarted","Data":"0e8cd14c8118cc7a30a42efbefa131e497495d5afcac5a4801f4dbbb4fe3dd1f"} Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.762293 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4z425" 
event={"ID":"20c76084-401b-41ca-ad08-2752d2d7132b","Type":"ContainerStarted","Data":"b368117ba3aebfba02513d6a32c5ca5f0ebfa0c43e99fc8257a241b99c5220d1"} Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.762409 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.763787 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm9g2" event={"ID":"4f877237-a18d-42d1-9727-d62eb52ea19c","Type":"ContainerStarted","Data":"9cce0de1aab578aca5c6d2c48f05f4c5d401d911417c6b26aa228814c985b23b"} Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.766800 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cspmf" event={"ID":"14b78b5a-3cbf-4b80-8831-8f522bf2a2e5","Type":"ContainerStarted","Data":"fb44bd4f132507c7671248487dacbc2bf59b357b07f10e3c88b15bc0162eb367"} Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.768272 4811 generic.go:334] "Generic (PLEG): container finished" podID="aabb6f4a-05fd-4f4f-9211-81884fdd4bb1" containerID="a6d1b88d499e352ea987eefe628ab414d758a7bad784b3bf7dd40bf87052d9d8" exitCode=0 Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.768863 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtdt8" event={"ID":"aabb6f4a-05fd-4f4f-9211-81884fdd4bb1","Type":"ContainerDied","Data":"a6d1b88d499e352ea987eefe628ab414d758a7bad784b3bf7dd40bf87052d9d8"} Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.768899 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtdt8" event={"ID":"aabb6f4a-05fd-4f4f-9211-81884fdd4bb1","Type":"ContainerStarted","Data":"f3f5d76128210bc1b30a7d7d1212971b13df018abe458cb59dfe60370a506368"} Feb 16 20:58:54 crc kubenswrapper[4811]: I0216 20:58:54.832329 4811 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-4z425" podStartSLOduration=131.832307577 podStartE2EDuration="2m11.832307577s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:54.799014927 +0000 UTC m=+152.728310885" watchObservedRunningTime="2026-02-16 20:58:54.832307577 +0000 UTC m=+152.761603525" Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.329086 4811 patch_prober.go:28] interesting pod/router-default-5444994796-lbxk8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 20:58:55 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 16 20:58:55 crc kubenswrapper[4811]: [+]process-running ok Feb 16 20:58:55 crc kubenswrapper[4811]: healthz check failed Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.330609 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lbxk8" podUID="2d817b52-21fc-40d9-a36f-487e6719ebfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.396554 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.421205 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v7grl"] Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.422434 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v7grl" Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.424723 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.439713 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v7grl"] Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.483778 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efc370a5-e41c-4eb0-8b79-44a3570cc5a8-catalog-content\") pod \"redhat-marketplace-v7grl\" (UID: \"efc370a5-e41c-4eb0-8b79-44a3570cc5a8\") " pod="openshift-marketplace/redhat-marketplace-v7grl" Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.483850 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj8jt\" (UniqueName: \"kubernetes.io/projected/efc370a5-e41c-4eb0-8b79-44a3570cc5a8-kube-api-access-vj8jt\") pod \"redhat-marketplace-v7grl\" (UID: \"efc370a5-e41c-4eb0-8b79-44a3570cc5a8\") " pod="openshift-marketplace/redhat-marketplace-v7grl" Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.483918 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efc370a5-e41c-4eb0-8b79-44a3570cc5a8-utilities\") pod \"redhat-marketplace-v7grl\" (UID: \"efc370a5-e41c-4eb0-8b79-44a3570cc5a8\") " pod="openshift-marketplace/redhat-marketplace-v7grl" Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.585823 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efc370a5-e41c-4eb0-8b79-44a3570cc5a8-catalog-content\") pod \"redhat-marketplace-v7grl\" (UID: 
\"efc370a5-e41c-4eb0-8b79-44a3570cc5a8\") " pod="openshift-marketplace/redhat-marketplace-v7grl" Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.585883 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj8jt\" (UniqueName: \"kubernetes.io/projected/efc370a5-e41c-4eb0-8b79-44a3570cc5a8-kube-api-access-vj8jt\") pod \"redhat-marketplace-v7grl\" (UID: \"efc370a5-e41c-4eb0-8b79-44a3570cc5a8\") " pod="openshift-marketplace/redhat-marketplace-v7grl" Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.585912 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efc370a5-e41c-4eb0-8b79-44a3570cc5a8-utilities\") pod \"redhat-marketplace-v7grl\" (UID: \"efc370a5-e41c-4eb0-8b79-44a3570cc5a8\") " pod="openshift-marketplace/redhat-marketplace-v7grl" Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.586449 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efc370a5-e41c-4eb0-8b79-44a3570cc5a8-utilities\") pod \"redhat-marketplace-v7grl\" (UID: \"efc370a5-e41c-4eb0-8b79-44a3570cc5a8\") " pod="openshift-marketplace/redhat-marketplace-v7grl" Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.586600 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efc370a5-e41c-4eb0-8b79-44a3570cc5a8-catalog-content\") pod \"redhat-marketplace-v7grl\" (UID: \"efc370a5-e41c-4eb0-8b79-44a3570cc5a8\") " pod="openshift-marketplace/redhat-marketplace-v7grl" Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.622505 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj8jt\" (UniqueName: \"kubernetes.io/projected/efc370a5-e41c-4eb0-8b79-44a3570cc5a8-kube-api-access-vj8jt\") pod \"redhat-marketplace-v7grl\" (UID: \"efc370a5-e41c-4eb0-8b79-44a3570cc5a8\") " 
pod="openshift-marketplace/redhat-marketplace-v7grl" Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.760088 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v7grl" Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.797165 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b","Type":"ContainerStarted","Data":"6f3a7b580a3e6b19f3671bcd46af3528e58bde79fec2c518215c80c6b46d0b6d"} Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.804670 4811 generic.go:334] "Generic (PLEG): container finished" podID="4f877237-a18d-42d1-9727-d62eb52ea19c" containerID="8a64a71cce89bc7916b6342721ea2f8e7e45dbc673236e1deeaf635f15d6b407" exitCode=0 Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.804755 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm9g2" event={"ID":"4f877237-a18d-42d1-9727-d62eb52ea19c","Type":"ContainerDied","Data":"8a64a71cce89bc7916b6342721ea2f8e7e45dbc673236e1deeaf635f15d6b407"} Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.822951 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.822930581 podStartE2EDuration="2.822930581s" podCreationTimestamp="2026-02-16 20:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:55.820390536 +0000 UTC m=+153.749686474" watchObservedRunningTime="2026-02-16 20:58:55.822930581 +0000 UTC m=+153.752226519" Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.829547 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k7p4t"] Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.831031 4811 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7p4t" Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.832657 4811 generic.go:334] "Generic (PLEG): container finished" podID="14b78b5a-3cbf-4b80-8831-8f522bf2a2e5" containerID="2d73531fff2b16ff1a74d2ff2c9cba3411cc677c837f4a31672f72518b049bb4" exitCode=0 Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.833485 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cspmf" event={"ID":"14b78b5a-3cbf-4b80-8831-8f522bf2a2e5","Type":"ContainerDied","Data":"2d73531fff2b16ff1a74d2ff2c9cba3411cc677c837f4a31672f72518b049bb4"} Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.849168 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7p4t"] Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.990236 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0222622d-0fbe-4f15-8a2b-049a68617336-utilities\") pod \"redhat-marketplace-k7p4t\" (UID: \"0222622d-0fbe-4f15-8a2b-049a68617336\") " pod="openshift-marketplace/redhat-marketplace-k7p4t" Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.990278 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0222622d-0fbe-4f15-8a2b-049a68617336-catalog-content\") pod \"redhat-marketplace-k7p4t\" (UID: \"0222622d-0fbe-4f15-8a2b-049a68617336\") " pod="openshift-marketplace/redhat-marketplace-k7p4t" Feb 16 20:58:55 crc kubenswrapper[4811]: I0216 20:58:55.990380 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcnjc\" (UniqueName: \"kubernetes.io/projected/0222622d-0fbe-4f15-8a2b-049a68617336-kube-api-access-hcnjc\") pod \"redhat-marketplace-k7p4t\" (UID: 
\"0222622d-0fbe-4f15-8a2b-049a68617336\") " pod="openshift-marketplace/redhat-marketplace-k7p4t" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.092310 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcnjc\" (UniqueName: \"kubernetes.io/projected/0222622d-0fbe-4f15-8a2b-049a68617336-kube-api-access-hcnjc\") pod \"redhat-marketplace-k7p4t\" (UID: \"0222622d-0fbe-4f15-8a2b-049a68617336\") " pod="openshift-marketplace/redhat-marketplace-k7p4t" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.092401 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0222622d-0fbe-4f15-8a2b-049a68617336-utilities\") pod \"redhat-marketplace-k7p4t\" (UID: \"0222622d-0fbe-4f15-8a2b-049a68617336\") " pod="openshift-marketplace/redhat-marketplace-k7p4t" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.092423 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0222622d-0fbe-4f15-8a2b-049a68617336-catalog-content\") pod \"redhat-marketplace-k7p4t\" (UID: \"0222622d-0fbe-4f15-8a2b-049a68617336\") " pod="openshift-marketplace/redhat-marketplace-k7p4t" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.092902 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0222622d-0fbe-4f15-8a2b-049a68617336-catalog-content\") pod \"redhat-marketplace-k7p4t\" (UID: \"0222622d-0fbe-4f15-8a2b-049a68617336\") " pod="openshift-marketplace/redhat-marketplace-k7p4t" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.093822 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0222622d-0fbe-4f15-8a2b-049a68617336-utilities\") pod \"redhat-marketplace-k7p4t\" (UID: \"0222622d-0fbe-4f15-8a2b-049a68617336\") " 
pod="openshift-marketplace/redhat-marketplace-k7p4t" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.124140 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.132723 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcnjc\" (UniqueName: \"kubernetes.io/projected/0222622d-0fbe-4f15-8a2b-049a68617336-kube-api-access-hcnjc\") pod \"redhat-marketplace-k7p4t\" (UID: \"0222622d-0fbe-4f15-8a2b-049a68617336\") " pod="openshift-marketplace/redhat-marketplace-k7p4t" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.134289 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.146728 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.147019 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.154558 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.244722 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7p4t" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.299673 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f50f5bf-5a57-46be-9a83-723597624d23-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"8f50f5bf-5a57-46be-9a83-723597624d23\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.299760 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f50f5bf-5a57-46be-9a83-723597624d23-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8f50f5bf-5a57-46be-9a83-723597624d23\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.329812 4811 patch_prober.go:28] interesting pod/router-default-5444994796-lbxk8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 20:58:56 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 16 20:58:56 crc kubenswrapper[4811]: [+]process-running ok Feb 16 20:58:56 crc kubenswrapper[4811]: healthz check failed Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.329867 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lbxk8" podUID="2d817b52-21fc-40d9-a36f-487e6719ebfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.358298 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-9xzcs" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.402601 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f50f5bf-5a57-46be-9a83-723597624d23-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"8f50f5bf-5a57-46be-9a83-723597624d23\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.402723 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f50f5bf-5a57-46be-9a83-723597624d23-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8f50f5bf-5a57-46be-9a83-723597624d23\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.402992 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f50f5bf-5a57-46be-9a83-723597624d23-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"8f50f5bf-5a57-46be-9a83-723597624d23\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.432112 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dcqch"] Feb 16 20:58:56 crc kubenswrapper[4811]: E0216 20:58:56.436207 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9195c217-c5bc-4625-9b9c-2aa209485e3c" containerName="collect-profiles" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.436234 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="9195c217-c5bc-4625-9b9c-2aa209485e3c" containerName="collect-profiles" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.436352 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="9195c217-c5bc-4625-9b9c-2aa209485e3c" 
containerName="collect-profiles" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.437161 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dcqch" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.444930 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v7grl"] Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.446089 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f50f5bf-5a57-46be-9a83-723597624d23-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8f50f5bf-5a57-46be-9a83-723597624d23\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.451954 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.466033 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dcqch"] Feb 16 20:58:56 crc kubenswrapper[4811]: W0216 20:58:56.498856 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefc370a5_e41c_4eb0_8b79_44a3570cc5a8.slice/crio-4cfb8c7d83ee04b0c7caa885ed7f185506559485e0fa9008407c35140e51cac6 WatchSource:0}: Error finding container 4cfb8c7d83ee04b0c7caa885ed7f185506559485e0fa9008407c35140e51cac6: Status 404 returned error can't find the container with id 4cfb8c7d83ee04b0c7caa885ed7f185506559485e0fa9008407c35140e51cac6 Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.504102 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x6d8\" (UniqueName: \"kubernetes.io/projected/9195c217-c5bc-4625-9b9c-2aa209485e3c-kube-api-access-9x6d8\") pod \"9195c217-c5bc-4625-9b9c-2aa209485e3c\" 
(UID: \"9195c217-c5bc-4625-9b9c-2aa209485e3c\") " Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.504164 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9195c217-c5bc-4625-9b9c-2aa209485e3c-secret-volume\") pod \"9195c217-c5bc-4625-9b9c-2aa209485e3c\" (UID: \"9195c217-c5bc-4625-9b9c-2aa209485e3c\") " Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.504279 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9195c217-c5bc-4625-9b9c-2aa209485e3c-config-volume\") pod \"9195c217-c5bc-4625-9b9c-2aa209485e3c\" (UID: \"9195c217-c5bc-4625-9b9c-2aa209485e3c\") " Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.505478 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9195c217-c5bc-4625-9b9c-2aa209485e3c-config-volume" (OuterVolumeSpecName: "config-volume") pod "9195c217-c5bc-4625-9b9c-2aa209485e3c" (UID: "9195c217-c5bc-4625-9b9c-2aa209485e3c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.511437 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.517611 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9195c217-c5bc-4625-9b9c-2aa209485e3c-kube-api-access-9x6d8" (OuterVolumeSpecName: "kube-api-access-9x6d8") pod "9195c217-c5bc-4625-9b9c-2aa209485e3c" (UID: "9195c217-c5bc-4625-9b9c-2aa209485e3c"). InnerVolumeSpecName "kube-api-access-9x6d8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.527205 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9195c217-c5bc-4625-9b9c-2aa209485e3c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9195c217-c5bc-4625-9b9c-2aa209485e3c" (UID: "9195c217-c5bc-4625-9b9c-2aa209485e3c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.605941 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7-utilities\") pod \"redhat-operators-dcqch\" (UID: \"764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7\") " pod="openshift-marketplace/redhat-operators-dcqch" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.606423 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7-catalog-content\") pod \"redhat-operators-dcqch\" (UID: \"764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7\") " pod="openshift-marketplace/redhat-operators-dcqch" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.606449 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwlqt\" (UniqueName: \"kubernetes.io/projected/764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7-kube-api-access-lwlqt\") pod \"redhat-operators-dcqch\" (UID: \"764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7\") " pod="openshift-marketplace/redhat-operators-dcqch" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.606516 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x6d8\" (UniqueName: \"kubernetes.io/projected/9195c217-c5bc-4625-9b9c-2aa209485e3c-kube-api-access-9x6d8\") on node \"crc\" DevicePath 
\"\"" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.606531 4811 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9195c217-c5bc-4625-9b9c-2aa209485e3c-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.606539 4811 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9195c217-c5bc-4625-9b9c-2aa209485e3c-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.625500 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zzqvw"] Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.630425 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zzqvw" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.643853 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zzqvw"] Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.707403 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd-utilities\") pod \"redhat-operators-zzqvw\" (UID: \"fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd\") " pod="openshift-marketplace/redhat-operators-zzqvw" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.707448 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbsmh\" (UniqueName: \"kubernetes.io/projected/fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd-kube-api-access-dbsmh\") pod \"redhat-operators-zzqvw\" (UID: \"fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd\") " pod="openshift-marketplace/redhat-operators-zzqvw" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.707491 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7-utilities\") pod \"redhat-operators-dcqch\" (UID: \"764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7\") " pod="openshift-marketplace/redhat-operators-dcqch" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.707510 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7-catalog-content\") pod \"redhat-operators-dcqch\" (UID: \"764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7\") " pod="openshift-marketplace/redhat-operators-dcqch" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.707530 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwlqt\" (UniqueName: \"kubernetes.io/projected/764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7-kube-api-access-lwlqt\") pod \"redhat-operators-dcqch\" (UID: \"764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7\") " pod="openshift-marketplace/redhat-operators-dcqch" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.707558 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd-catalog-content\") pod \"redhat-operators-zzqvw\" (UID: \"fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd\") " pod="openshift-marketplace/redhat-operators-zzqvw" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.708117 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7-utilities\") pod \"redhat-operators-dcqch\" (UID: \"764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7\") " pod="openshift-marketplace/redhat-operators-dcqch" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.708374 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7-catalog-content\") pod \"redhat-operators-dcqch\" (UID: \"764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7\") " pod="openshift-marketplace/redhat-operators-dcqch" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.737617 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwlqt\" (UniqueName: \"kubernetes.io/projected/764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7-kube-api-access-lwlqt\") pod \"redhat-operators-dcqch\" (UID: \"764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7\") " pod="openshift-marketplace/redhat-operators-dcqch" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.767859 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7p4t"] Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.809362 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd-utilities\") pod \"redhat-operators-zzqvw\" (UID: \"fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd\") " pod="openshift-marketplace/redhat-operators-zzqvw" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.809553 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbsmh\" (UniqueName: \"kubernetes.io/projected/fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd-kube-api-access-dbsmh\") pod \"redhat-operators-zzqvw\" (UID: \"fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd\") " pod="openshift-marketplace/redhat-operators-zzqvw" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.809670 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd-catalog-content\") pod \"redhat-operators-zzqvw\" (UID: \"fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd\") " pod="openshift-marketplace/redhat-operators-zzqvw" Feb 16 
20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.814435 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd-catalog-content\") pod \"redhat-operators-zzqvw\" (UID: \"fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd\") " pod="openshift-marketplace/redhat-operators-zzqvw" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.815536 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dcqch" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.821756 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd-utilities\") pod \"redhat-operators-zzqvw\" (UID: \"fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd\") " pod="openshift-marketplace/redhat-operators-zzqvw" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.847437 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbsmh\" (UniqueName: \"kubernetes.io/projected/fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd-kube-api-access-dbsmh\") pod \"redhat-operators-zzqvw\" (UID: \"fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd\") " pod="openshift-marketplace/redhat-operators-zzqvw" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.862031 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7p4t" event={"ID":"0222622d-0fbe-4f15-8a2b-049a68617336","Type":"ContainerStarted","Data":"e45fbdd2614c47c8b80b762be5cdd9cb6eb280cae2b36c46e9131b751534f242"} Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.929076 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.941166 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7grl" 
event={"ID":"efc370a5-e41c-4eb0-8b79-44a3570cc5a8","Type":"ContainerStarted","Data":"4cfb8c7d83ee04b0c7caa885ed7f185506559485e0fa9008407c35140e51cac6"} Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.959498 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-qjhps_218883f2-cdcd-4b76-8f3c-dea0af40092c/cluster-samples-operator/0.log" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.959570 4811 generic.go:334] "Generic (PLEG): container finished" podID="218883f2-cdcd-4b76-8f3c-dea0af40092c" containerID="095e8077fc811b83e17cecdfc1c6409ef51469dc6043c9d9aca0a2dc69a04a9f" exitCode=2 Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.959646 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qjhps" event={"ID":"218883f2-cdcd-4b76-8f3c-dea0af40092c","Type":"ContainerDied","Data":"095e8077fc811b83e17cecdfc1c6409ef51469dc6043c9d9aca0a2dc69a04a9f"} Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.960253 4811 scope.go:117] "RemoveContainer" containerID="095e8077fc811b83e17cecdfc1c6409ef51469dc6043c9d9aca0a2dc69a04a9f" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.961996 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zzqvw" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.972606 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-9xzcs" event={"ID":"9195c217-c5bc-4625-9b9c-2aa209485e3c","Type":"ContainerDied","Data":"d52f5f3134d1206254db713cc9562a044fd45782b24e18e9176ce2a1a3531616"} Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.972656 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-9xzcs" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.972683 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d52f5f3134d1206254db713cc9562a044fd45782b24e18e9176ce2a1a3531616" Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.991646 4811 generic.go:334] "Generic (PLEG): container finished" podID="f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b" containerID="6f3a7b580a3e6b19f3671bcd46af3528e58bde79fec2c518215c80c6b46d0b6d" exitCode=0 Feb 16 20:58:56 crc kubenswrapper[4811]: I0216 20:58:56.991704 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b","Type":"ContainerDied","Data":"6f3a7b580a3e6b19f3671bcd46af3528e58bde79fec2c518215c80c6b46d0b6d"} Feb 16 20:58:57 crc kubenswrapper[4811]: I0216 20:58:57.292023 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dcqch"] Feb 16 20:58:57 crc kubenswrapper[4811]: I0216 20:58:57.325035 4811 patch_prober.go:28] interesting pod/router-default-5444994796-lbxk8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 20:58:57 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 16 20:58:57 crc kubenswrapper[4811]: [+]process-running ok Feb 16 20:58:57 crc kubenswrapper[4811]: healthz check failed Feb 16 20:58:57 crc kubenswrapper[4811]: I0216 20:58:57.325104 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lbxk8" podUID="2d817b52-21fc-40d9-a36f-487e6719ebfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:58:57 crc kubenswrapper[4811]: I0216 20:58:57.480604 4811 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zzqvw"] Feb 16 20:58:57 crc kubenswrapper[4811]: W0216 20:58:57.512904 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa5c03e5_f6e2_4bef_ae66_e85fdb1b54dd.slice/crio-9c37520ba6c27aa418e74d428cfcb3034009055a4e845fde19a338d55543ad21 WatchSource:0}: Error finding container 9c37520ba6c27aa418e74d428cfcb3034009055a4e845fde19a338d55543ad21: Status 404 returned error can't find the container with id 9c37520ba6c27aa418e74d428cfcb3034009055a4e845fde19a338d55543ad21 Feb 16 20:58:58 crc kubenswrapper[4811]: I0216 20:58:58.020330 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-qjhps_218883f2-cdcd-4b76-8f3c-dea0af40092c/cluster-samples-operator/0.log" Feb 16 20:58:58 crc kubenswrapper[4811]: I0216 20:58:58.020509 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qjhps" event={"ID":"218883f2-cdcd-4b76-8f3c-dea0af40092c","Type":"ContainerStarted","Data":"083983bed7f30768bcefd98c44a712118f5af8214f8b5aa56f989c986aebbdef"} Feb 16 20:58:58 crc kubenswrapper[4811]: I0216 20:58:58.043996 4811 generic.go:334] "Generic (PLEG): container finished" podID="fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd" containerID="839c9faf8d91bf0e7404e1562df8c811deada819cd3e211b803e050cf7cd4dd1" exitCode=0 Feb 16 20:58:58 crc kubenswrapper[4811]: I0216 20:58:58.044163 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzqvw" event={"ID":"fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd","Type":"ContainerDied","Data":"839c9faf8d91bf0e7404e1562df8c811deada819cd3e211b803e050cf7cd4dd1"} Feb 16 20:58:58 crc kubenswrapper[4811]: I0216 20:58:58.044229 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzqvw" 
event={"ID":"fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd","Type":"ContainerStarted","Data":"9c37520ba6c27aa418e74d428cfcb3034009055a4e845fde19a338d55543ad21"} Feb 16 20:58:58 crc kubenswrapper[4811]: I0216 20:58:58.049398 4811 generic.go:334] "Generic (PLEG): container finished" podID="764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7" containerID="fc39fd583496d29361ce16910ea53656d56fd815d1b30fdb3472e49d5f614c7b" exitCode=0 Feb 16 20:58:58 crc kubenswrapper[4811]: I0216 20:58:58.049538 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqch" event={"ID":"764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7","Type":"ContainerDied","Data":"fc39fd583496d29361ce16910ea53656d56fd815d1b30fdb3472e49d5f614c7b"} Feb 16 20:58:58 crc kubenswrapper[4811]: I0216 20:58:58.049567 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqch" event={"ID":"764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7","Type":"ContainerStarted","Data":"713839fb06a3be82a10ece9b92dbcdbc1b88103fb4c7cc4d38294f9cc0877b57"} Feb 16 20:58:58 crc kubenswrapper[4811]: I0216 20:58:58.055922 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8f50f5bf-5a57-46be-9a83-723597624d23","Type":"ContainerStarted","Data":"c4eb0df60b8e540c6e5c81d0aa958344ec6320a6461bb4fb14234ed27905692d"} Feb 16 20:58:58 crc kubenswrapper[4811]: I0216 20:58:58.055972 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8f50f5bf-5a57-46be-9a83-723597624d23","Type":"ContainerStarted","Data":"55679464abafe3e581201e7503dc68d0928fac673db69b7cd0071506b96ffb3f"} Feb 16 20:58:58 crc kubenswrapper[4811]: I0216 20:58:58.067489 4811 generic.go:334] "Generic (PLEG): container finished" podID="0222622d-0fbe-4f15-8a2b-049a68617336" containerID="e75b8d21573ba94264f17a5cad2472c01ac7bf71914f1cd935c9a0dd9fadd412" exitCode=0 Feb 16 20:58:58 crc kubenswrapper[4811]: I0216 
20:58:58.067630 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7p4t" event={"ID":"0222622d-0fbe-4f15-8a2b-049a68617336","Type":"ContainerDied","Data":"e75b8d21573ba94264f17a5cad2472c01ac7bf71914f1cd935c9a0dd9fadd412"} Feb 16 20:58:58 crc kubenswrapper[4811]: I0216 20:58:58.090588 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7grl" event={"ID":"efc370a5-e41c-4eb0-8b79-44a3570cc5a8","Type":"ContainerDied","Data":"847549fe5ea4d1a4626c3b7135c48525d0f27d4e57599cf5ae346ef0c0fa00ab"} Feb 16 20:58:58 crc kubenswrapper[4811]: I0216 20:58:58.090524 4811 generic.go:334] "Generic (PLEG): container finished" podID="efc370a5-e41c-4eb0-8b79-44a3570cc5a8" containerID="847549fe5ea4d1a4626c3b7135c48525d0f27d4e57599cf5ae346ef0c0fa00ab" exitCode=0 Feb 16 20:58:58 crc kubenswrapper[4811]: I0216 20:58:58.132465 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.132440949 podStartE2EDuration="2.132440949s" podCreationTimestamp="2026-02-16 20:58:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:58.129514946 +0000 UTC m=+156.058810904" watchObservedRunningTime="2026-02-16 20:58:58.132440949 +0000 UTC m=+156.061736887" Feb 16 20:58:58 crc kubenswrapper[4811]: I0216 20:58:58.339458 4811 patch_prober.go:28] interesting pod/router-default-5444994796-lbxk8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 20:58:58 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 16 20:58:58 crc kubenswrapper[4811]: [+]process-running ok Feb 16 20:58:58 crc kubenswrapper[4811]: healthz check failed Feb 16 20:58:58 crc kubenswrapper[4811]: I0216 
20:58:58.339854 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lbxk8" podUID="2d817b52-21fc-40d9-a36f-487e6719ebfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:58:58 crc kubenswrapper[4811]: I0216 20:58:58.492561 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 20:58:58 crc kubenswrapper[4811]: I0216 20:58:58.569821 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b-kube-api-access\") pod \"f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b\" (UID: \"f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b\") " Feb 16 20:58:58 crc kubenswrapper[4811]: I0216 20:58:58.570051 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b-kubelet-dir\") pod \"f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b\" (UID: \"f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b\") " Feb 16 20:58:58 crc kubenswrapper[4811]: I0216 20:58:58.570350 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b" (UID: "f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:58:58 crc kubenswrapper[4811]: I0216 20:58:58.601453 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b" (UID: "f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:58:58 crc kubenswrapper[4811]: I0216 20:58:58.671333 4811 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 20:58:58 crc kubenswrapper[4811]: I0216 20:58:58.671366 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 20:58:59 crc kubenswrapper[4811]: I0216 20:58:59.184770 4811 generic.go:334] "Generic (PLEG): container finished" podID="8f50f5bf-5a57-46be-9a83-723597624d23" containerID="c4eb0df60b8e540c6e5c81d0aa958344ec6320a6461bb4fb14234ed27905692d" exitCode=0 Feb 16 20:58:59 crc kubenswrapper[4811]: I0216 20:58:59.184975 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8f50f5bf-5a57-46be-9a83-723597624d23","Type":"ContainerDied","Data":"c4eb0df60b8e540c6e5c81d0aa958344ec6320a6461bb4fb14234ed27905692d"} Feb 16 20:58:59 crc kubenswrapper[4811]: I0216 20:58:59.197377 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b","Type":"ContainerDied","Data":"89de4452cd924df129e151926a2ec882ef5319b2dc3969f40cf2ba27bce5e629"} Feb 16 20:58:59 crc kubenswrapper[4811]: I0216 20:58:59.197434 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89de4452cd924df129e151926a2ec882ef5319b2dc3969f40cf2ba27bce5e629" Feb 16 20:58:59 crc kubenswrapper[4811]: I0216 20:58:59.197505 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 20:58:59 crc kubenswrapper[4811]: I0216 20:58:59.323074 4811 patch_prober.go:28] interesting pod/router-default-5444994796-lbxk8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 20:58:59 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 16 20:58:59 crc kubenswrapper[4811]: [+]process-running ok Feb 16 20:58:59 crc kubenswrapper[4811]: healthz check failed Feb 16 20:58:59 crc kubenswrapper[4811]: I0216 20:58:59.323145 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lbxk8" podUID="2d817b52-21fc-40d9-a36f-487e6719ebfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:58:59 crc kubenswrapper[4811]: I0216 20:58:59.702107 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-nlp5w" Feb 16 20:58:59 crc kubenswrapper[4811]: I0216 20:58:59.844046 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:59:00 crc kubenswrapper[4811]: I0216 20:59:00.333390 4811 patch_prober.go:28] interesting pod/router-default-5444994796-lbxk8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 20:59:00 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 16 20:59:00 crc kubenswrapper[4811]: [+]process-running ok Feb 16 20:59:00 crc kubenswrapper[4811]: healthz check failed Feb 16 20:59:00 crc kubenswrapper[4811]: I0216 20:59:00.333458 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lbxk8" 
podUID="2d817b52-21fc-40d9-a36f-487e6719ebfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:59:00 crc kubenswrapper[4811]: I0216 20:59:00.596012 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 20:59:00 crc kubenswrapper[4811]: I0216 20:59:00.661828 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f50f5bf-5a57-46be-9a83-723597624d23-kube-api-access\") pod \"8f50f5bf-5a57-46be-9a83-723597624d23\" (UID: \"8f50f5bf-5a57-46be-9a83-723597624d23\") " Feb 16 20:59:00 crc kubenswrapper[4811]: I0216 20:59:00.661989 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f50f5bf-5a57-46be-9a83-723597624d23-kubelet-dir\") pod \"8f50f5bf-5a57-46be-9a83-723597624d23\" (UID: \"8f50f5bf-5a57-46be-9a83-723597624d23\") " Feb 16 20:59:00 crc kubenswrapper[4811]: I0216 20:59:00.662743 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f50f5bf-5a57-46be-9a83-723597624d23-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8f50f5bf-5a57-46be-9a83-723597624d23" (UID: "8f50f5bf-5a57-46be-9a83-723597624d23"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:59:00 crc kubenswrapper[4811]: I0216 20:59:00.696404 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f50f5bf-5a57-46be-9a83-723597624d23-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8f50f5bf-5a57-46be-9a83-723597624d23" (UID: "8f50f5bf-5a57-46be-9a83-723597624d23"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:59:00 crc kubenswrapper[4811]: I0216 20:59:00.769671 4811 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f50f5bf-5a57-46be-9a83-723597624d23-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:00 crc kubenswrapper[4811]: I0216 20:59:00.769715 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f50f5bf-5a57-46be-9a83-723597624d23-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:01 crc kubenswrapper[4811]: I0216 20:59:01.267903 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8f50f5bf-5a57-46be-9a83-723597624d23","Type":"ContainerDied","Data":"55679464abafe3e581201e7503dc68d0928fac673db69b7cd0071506b96ffb3f"} Feb 16 20:59:01 crc kubenswrapper[4811]: I0216 20:59:01.267954 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55679464abafe3e581201e7503dc68d0928fac673db69b7cd0071506b96ffb3f" Feb 16 20:59:01 crc kubenswrapper[4811]: I0216 20:59:01.268030 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 20:59:01 crc kubenswrapper[4811]: I0216 20:59:01.323747 4811 patch_prober.go:28] interesting pod/router-default-5444994796-lbxk8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 20:59:01 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 16 20:59:01 crc kubenswrapper[4811]: [+]process-running ok Feb 16 20:59:01 crc kubenswrapper[4811]: healthz check failed Feb 16 20:59:01 crc kubenswrapper[4811]: I0216 20:59:01.323828 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lbxk8" podUID="2d817b52-21fc-40d9-a36f-487e6719ebfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:59:02 crc kubenswrapper[4811]: I0216 20:59:02.321519 4811 patch_prober.go:28] interesting pod/router-default-5444994796-lbxk8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 20:59:02 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 16 20:59:02 crc kubenswrapper[4811]: [+]process-running ok Feb 16 20:59:02 crc kubenswrapper[4811]: healthz check failed Feb 16 20:59:02 crc kubenswrapper[4811]: I0216 20:59:02.322036 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lbxk8" podUID="2d817b52-21fc-40d9-a36f-487e6719ebfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:59:03 crc kubenswrapper[4811]: I0216 20:59:03.322562 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-lbxk8" Feb 16 20:59:03 crc kubenswrapper[4811]: I0216 20:59:03.330278 4811 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-lbxk8" Feb 16 20:59:03 crc kubenswrapper[4811]: I0216 20:59:03.873670 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-mn795" Feb 16 20:59:04 crc kubenswrapper[4811]: I0216 20:59:04.191949 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-8vgph" Feb 16 20:59:04 crc kubenswrapper[4811]: I0216 20:59:04.196358 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-8vgph" Feb 16 20:59:06 crc kubenswrapper[4811]: I0216 20:59:06.087028 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs\") pod \"network-metrics-daemon-7nk7k\" (UID: \"1b4c0a11-23d9-412e-a5d8-120d622bef57\") " pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:59:06 crc kubenswrapper[4811]: I0216 20:59:06.114435 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b4c0a11-23d9-412e-a5d8-120d622bef57-metrics-certs\") pod \"network-metrics-daemon-7nk7k\" (UID: \"1b4c0a11-23d9-412e-a5d8-120d622bef57\") " pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:59:06 crc kubenswrapper[4811]: I0216 20:59:06.127574 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7nk7k" Feb 16 20:59:13 crc kubenswrapper[4811]: I0216 20:59:13.134101 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 20:59:18 crc kubenswrapper[4811]: I0216 20:59:18.363879 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 20:59:18 crc kubenswrapper[4811]: I0216 20:59:18.364697 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 20:59:22 crc kubenswrapper[4811]: I0216 20:59:22.443959 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4f8kg"] Feb 16 20:59:24 crc kubenswrapper[4811]: I0216 20:59:24.310113 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gdcrk" Feb 16 20:59:24 crc kubenswrapper[4811]: E0216 20:59:24.719936 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 16 20:59:24 crc kubenswrapper[4811]: E0216 20:59:24.720218 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p5lzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-cspmf_openshift-marketplace(14b78b5a-3cbf-4b80-8831-8f522bf2a2e5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 20:59:24 crc kubenswrapper[4811]: E0216 20:59:24.721512 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-cspmf" podUID="14b78b5a-3cbf-4b80-8831-8f522bf2a2e5" Feb 16 20:59:24 crc kubenswrapper[4811]: E0216 20:59:24.811358 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 16 20:59:24 crc kubenswrapper[4811]: E0216 20:59:24.811528 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-58xmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFro
mSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-bz6d7_openshift-marketplace(08f82c33-6a50-480c-b780-e95a09a3e064): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 20:59:24 crc kubenswrapper[4811]: E0216 20:59:24.812935 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-bz6d7" podUID="08f82c33-6a50-480c-b780-e95a09a3e064" Feb 16 20:59:26 crc kubenswrapper[4811]: E0216 20:59:26.254593 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-cspmf" podUID="14b78b5a-3cbf-4b80-8831-8f522bf2a2e5" Feb 16 20:59:26 crc kubenswrapper[4811]: E0216 20:59:26.255476 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-bz6d7" podUID="08f82c33-6a50-480c-b780-e95a09a3e064" Feb 16 20:59:26 crc kubenswrapper[4811]: E0216 20:59:26.361886 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 16 20:59:26 crc kubenswrapper[4811]: E0216 20:59:26.362129 4811 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-278zz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jtdt8_openshift-marketplace(aabb6f4a-05fd-4f4f-9211-81884fdd4bb1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 20:59:26 crc kubenswrapper[4811]: E0216 20:59:26.363272 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-jtdt8" podUID="aabb6f4a-05fd-4f4f-9211-81884fdd4bb1" Feb 16 20:59:26 crc kubenswrapper[4811]: E0216 20:59:26.401803 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 16 20:59:26 crc kubenswrapper[4811]: E0216 20:59:26.402376 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwlqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmo
rProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dcqch_openshift-marketplace(764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 20:59:26 crc kubenswrapper[4811]: E0216 20:59:26.404044 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-dcqch" podUID="764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7" Feb 16 20:59:26 crc kubenswrapper[4811]: E0216 20:59:26.427453 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 16 20:59:26 crc kubenswrapper[4811]: E0216 20:59:26.427662 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h2dq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-mm9g2_openshift-marketplace(4f877237-a18d-42d1-9727-d62eb52ea19c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 20:59:26 crc kubenswrapper[4811]: E0216 20:59:26.428926 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-mm9g2" podUID="4f877237-a18d-42d1-9727-d62eb52ea19c" Feb 16 20:59:26 crc 
kubenswrapper[4811]: E0216 20:59:26.500837 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-mm9g2" podUID="4f877237-a18d-42d1-9727-d62eb52ea19c" Feb 16 20:59:26 crc kubenswrapper[4811]: E0216 20:59:26.501245 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jtdt8" podUID="aabb6f4a-05fd-4f4f-9211-81884fdd4bb1" Feb 16 20:59:26 crc kubenswrapper[4811]: E0216 20:59:26.503897 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-dcqch" podUID="764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7" Feb 16 20:59:26 crc kubenswrapper[4811]: I0216 20:59:26.685225 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7nk7k"] Feb 16 20:59:26 crc kubenswrapper[4811]: W0216 20:59:26.726993 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b4c0a11_23d9_412e_a5d8_120d622bef57.slice/crio-3ba0de5188f42a157345aea90b7c2d15bfb71184cf7dc21cfd2de74c6b172b03 WatchSource:0}: Error finding container 3ba0de5188f42a157345aea90b7c2d15bfb71184cf7dc21cfd2de74c6b172b03: Status 404 returned error can't find the container with id 3ba0de5188f42a157345aea90b7c2d15bfb71184cf7dc21cfd2de74c6b172b03 Feb 16 20:59:27 crc kubenswrapper[4811]: I0216 20:59:27.510417 4811 generic.go:334] "Generic (PLEG): container finished" 
podID="efc370a5-e41c-4eb0-8b79-44a3570cc5a8" containerID="bc18a83c104f380d6b90c4f16bcc887ba498ba8ca301c5600faa834c01ffc3a9" exitCode=0 Feb 16 20:59:27 crc kubenswrapper[4811]: I0216 20:59:27.510514 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7grl" event={"ID":"efc370a5-e41c-4eb0-8b79-44a3570cc5a8","Type":"ContainerDied","Data":"bc18a83c104f380d6b90c4f16bcc887ba498ba8ca301c5600faa834c01ffc3a9"} Feb 16 20:59:27 crc kubenswrapper[4811]: I0216 20:59:27.516802 4811 generic.go:334] "Generic (PLEG): container finished" podID="fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd" containerID="2c5a91b1633d9f8f0b1e65f32a2595bf329aff8e6ee2761d1afcea75487bd15e" exitCode=0 Feb 16 20:59:27 crc kubenswrapper[4811]: I0216 20:59:27.516868 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzqvw" event={"ID":"fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd","Type":"ContainerDied","Data":"2c5a91b1633d9f8f0b1e65f32a2595bf329aff8e6ee2761d1afcea75487bd15e"} Feb 16 20:59:27 crc kubenswrapper[4811]: I0216 20:59:27.524776 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7nk7k" event={"ID":"1b4c0a11-23d9-412e-a5d8-120d622bef57","Type":"ContainerStarted","Data":"00df922ce4740e6a08053c788ec40d38210a97853a6f0ccfa170cdb5234dbf30"} Feb 16 20:59:27 crc kubenswrapper[4811]: I0216 20:59:27.524859 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7nk7k" event={"ID":"1b4c0a11-23d9-412e-a5d8-120d622bef57","Type":"ContainerStarted","Data":"3ba0de5188f42a157345aea90b7c2d15bfb71184cf7dc21cfd2de74c6b172b03"} Feb 16 20:59:27 crc kubenswrapper[4811]: I0216 20:59:27.541853 4811 generic.go:334] "Generic (PLEG): container finished" podID="0222622d-0fbe-4f15-8a2b-049a68617336" containerID="b6104e0c92b744974a536189ea0c6722bfdcf0e2ac1d5f831059618add867255" exitCode=0 Feb 16 20:59:27 crc kubenswrapper[4811]: I0216 20:59:27.541901 4811 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7p4t" event={"ID":"0222622d-0fbe-4f15-8a2b-049a68617336","Type":"ContainerDied","Data":"b6104e0c92b744974a536189ea0c6722bfdcf0e2ac1d5f831059618add867255"} Feb 16 20:59:28 crc kubenswrapper[4811]: I0216 20:59:28.550982 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzqvw" event={"ID":"fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd","Type":"ContainerStarted","Data":"1cb1dd3069592f0df5336f8648ecb7717ac1fc860a6a7d782f1e6d34dd031638"} Feb 16 20:59:28 crc kubenswrapper[4811]: I0216 20:59:28.553077 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7p4t" event={"ID":"0222622d-0fbe-4f15-8a2b-049a68617336","Type":"ContainerStarted","Data":"9b19ee31e4397a7be01c1b90803166ed1e8680afabf755715b316065bb1c5d18"} Feb 16 20:59:28 crc kubenswrapper[4811]: I0216 20:59:28.555111 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7nk7k" event={"ID":"1b4c0a11-23d9-412e-a5d8-120d622bef57","Type":"ContainerStarted","Data":"cd3db27faac7c689fb3afb36d70d40262cee8638a2ea0129e23a1e093e5a68fc"} Feb 16 20:59:28 crc kubenswrapper[4811]: I0216 20:59:28.556827 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7grl" event={"ID":"efc370a5-e41c-4eb0-8b79-44a3570cc5a8","Type":"ContainerStarted","Data":"697ee72d959c3a00cd54d32881ab0497cfbaaf36936d20fa030284b6dc61de9d"} Feb 16 20:59:28 crc kubenswrapper[4811]: I0216 20:59:28.579589 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zzqvw" podStartSLOduration=2.624055463 podStartE2EDuration="32.579569633s" podCreationTimestamp="2026-02-16 20:58:56 +0000 UTC" firstStartedPulling="2026-02-16 20:58:58.059761946 +0000 UTC m=+155.989057884" lastFinishedPulling="2026-02-16 20:59:28.015276116 +0000 UTC 
m=+185.944572054" observedRunningTime="2026-02-16 20:59:28.579234345 +0000 UTC m=+186.508530293" watchObservedRunningTime="2026-02-16 20:59:28.579569633 +0000 UTC m=+186.508865571" Feb 16 20:59:28 crc kubenswrapper[4811]: I0216 20:59:28.600211 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v7grl" podStartSLOduration=3.75989796 podStartE2EDuration="33.600170943s" podCreationTimestamp="2026-02-16 20:58:55 +0000 UTC" firstStartedPulling="2026-02-16 20:58:58.099122389 +0000 UTC m=+156.028418327" lastFinishedPulling="2026-02-16 20:59:27.939395372 +0000 UTC m=+185.868691310" observedRunningTime="2026-02-16 20:59:28.598898561 +0000 UTC m=+186.528194499" watchObservedRunningTime="2026-02-16 20:59:28.600170943 +0000 UTC m=+186.529466881" Feb 16 20:59:28 crc kubenswrapper[4811]: I0216 20:59:28.639781 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k7p4t" podStartSLOduration=3.761180833 podStartE2EDuration="33.639758002s" podCreationTimestamp="2026-02-16 20:58:55 +0000 UTC" firstStartedPulling="2026-02-16 20:58:58.07340063 +0000 UTC m=+156.002696568" lastFinishedPulling="2026-02-16 20:59:27.951977799 +0000 UTC m=+185.881273737" observedRunningTime="2026-02-16 20:59:28.6226261 +0000 UTC m=+186.551922038" watchObservedRunningTime="2026-02-16 20:59:28.639758002 +0000 UTC m=+186.569053940" Feb 16 20:59:28 crc kubenswrapper[4811]: I0216 20:59:28.640234 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-7nk7k" podStartSLOduration=165.640228204 podStartE2EDuration="2m45.640228204s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:59:28.636551051 +0000 UTC m=+186.565846989" watchObservedRunningTime="2026-02-16 20:59:28.640228204 +0000 
UTC m=+186.569524152" Feb 16 20:59:29 crc kubenswrapper[4811]: I0216 20:59:29.859914 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:59:35 crc kubenswrapper[4811]: I0216 20:59:35.328536 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 20:59:35 crc kubenswrapper[4811]: E0216 20:59:35.330823 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b" containerName="pruner" Feb 16 20:59:35 crc kubenswrapper[4811]: I0216 20:59:35.330860 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b" containerName="pruner" Feb 16 20:59:35 crc kubenswrapper[4811]: E0216 20:59:35.330904 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f50f5bf-5a57-46be-9a83-723597624d23" containerName="pruner" Feb 16 20:59:35 crc kubenswrapper[4811]: I0216 20:59:35.330917 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f50f5bf-5a57-46be-9a83-723597624d23" containerName="pruner" Feb 16 20:59:35 crc kubenswrapper[4811]: I0216 20:59:35.331382 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1af7b5f-49cc-43c5-a81f-b35c1cf0bf3b" containerName="pruner" Feb 16 20:59:35 crc kubenswrapper[4811]: I0216 20:59:35.331425 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f50f5bf-5a57-46be-9a83-723597624d23" containerName="pruner" Feb 16 20:59:35 crc kubenswrapper[4811]: I0216 20:59:35.332128 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 20:59:35 crc kubenswrapper[4811]: I0216 20:59:35.339253 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 20:59:35 crc kubenswrapper[4811]: I0216 20:59:35.339517 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 20:59:35 crc kubenswrapper[4811]: I0216 20:59:35.347662 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 20:59:35 crc kubenswrapper[4811]: I0216 20:59:35.405779 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e9b02c1-4869-4d3f-aa19-2f5139428e1a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4e9b02c1-4869-4d3f-aa19-2f5139428e1a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 20:59:35 crc kubenswrapper[4811]: I0216 20:59:35.405833 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e9b02c1-4869-4d3f-aa19-2f5139428e1a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4e9b02c1-4869-4d3f-aa19-2f5139428e1a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 20:59:35 crc kubenswrapper[4811]: I0216 20:59:35.507075 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e9b02c1-4869-4d3f-aa19-2f5139428e1a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4e9b02c1-4869-4d3f-aa19-2f5139428e1a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 20:59:35 crc kubenswrapper[4811]: I0216 20:59:35.507130 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/4e9b02c1-4869-4d3f-aa19-2f5139428e1a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4e9b02c1-4869-4d3f-aa19-2f5139428e1a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 20:59:35 crc kubenswrapper[4811]: I0216 20:59:35.507245 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e9b02c1-4869-4d3f-aa19-2f5139428e1a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4e9b02c1-4869-4d3f-aa19-2f5139428e1a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 20:59:35 crc kubenswrapper[4811]: I0216 20:59:35.532166 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e9b02c1-4869-4d3f-aa19-2f5139428e1a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4e9b02c1-4869-4d3f-aa19-2f5139428e1a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 20:59:35 crc kubenswrapper[4811]: I0216 20:59:35.654079 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 20:59:35 crc kubenswrapper[4811]: I0216 20:59:35.803890 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v7grl" Feb 16 20:59:35 crc kubenswrapper[4811]: I0216 20:59:35.808177 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v7grl" Feb 16 20:59:36 crc kubenswrapper[4811]: I0216 20:59:36.113939 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 20:59:36 crc kubenswrapper[4811]: I0216 20:59:36.245738 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k7p4t" Feb 16 20:59:36 crc kubenswrapper[4811]: I0216 20:59:36.246185 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k7p4t" Feb 16 20:59:36 crc kubenswrapper[4811]: I0216 20:59:36.293793 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v7grl" Feb 16 20:59:36 crc kubenswrapper[4811]: I0216 20:59:36.296383 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k7p4t" Feb 16 20:59:36 crc kubenswrapper[4811]: I0216 20:59:36.601334 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4e9b02c1-4869-4d3f-aa19-2f5139428e1a","Type":"ContainerStarted","Data":"19eeaa4aae418fdbb8fc34d6460160478bb4b4759f342ec2082dbbf94a7b3e9f"} Feb 16 20:59:36 crc kubenswrapper[4811]: I0216 20:59:36.659109 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k7p4t" Feb 16 20:59:36 crc kubenswrapper[4811]: I0216 20:59:36.662787 4811 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v7grl" Feb 16 20:59:36 crc kubenswrapper[4811]: I0216 20:59:36.964040 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zzqvw" Feb 16 20:59:36 crc kubenswrapper[4811]: I0216 20:59:36.964129 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zzqvw" Feb 16 20:59:37 crc kubenswrapper[4811]: I0216 20:59:37.012687 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zzqvw" Feb 16 20:59:37 crc kubenswrapper[4811]: I0216 20:59:37.609118 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4e9b02c1-4869-4d3f-aa19-2f5139428e1a","Type":"ContainerStarted","Data":"f785b2f9c695f51946948e839e5355d67ed8f1237df39128abdf954710bed8f2"} Feb 16 20:59:37 crc kubenswrapper[4811]: I0216 20:59:37.629613 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.629592656 podStartE2EDuration="2.629592656s" podCreationTimestamp="2026-02-16 20:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:59:37.627309668 +0000 UTC m=+195.556605606" watchObservedRunningTime="2026-02-16 20:59:37.629592656 +0000 UTC m=+195.558888594" Feb 16 20:59:37 crc kubenswrapper[4811]: I0216 20:59:37.654901 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zzqvw" Feb 16 20:59:37 crc kubenswrapper[4811]: I0216 20:59:37.926958 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7p4t"] Feb 16 20:59:38 crc kubenswrapper[4811]: I0216 20:59:38.616287 4811 generic.go:334] 
"Generic (PLEG): container finished" podID="4e9b02c1-4869-4d3f-aa19-2f5139428e1a" containerID="f785b2f9c695f51946948e839e5355d67ed8f1237df39128abdf954710bed8f2" exitCode=0 Feb 16 20:59:38 crc kubenswrapper[4811]: I0216 20:59:38.616397 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4e9b02c1-4869-4d3f-aa19-2f5139428e1a","Type":"ContainerDied","Data":"f785b2f9c695f51946948e839e5355d67ed8f1237df39128abdf954710bed8f2"} Feb 16 20:59:38 crc kubenswrapper[4811]: I0216 20:59:38.617424 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k7p4t" podUID="0222622d-0fbe-4f15-8a2b-049a68617336" containerName="registry-server" containerID="cri-o://9b19ee31e4397a7be01c1b90803166ed1e8680afabf755715b316065bb1c5d18" gracePeriod=2 Feb 16 20:59:39 crc kubenswrapper[4811]: I0216 20:59:39.339673 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zzqvw"] Feb 16 20:59:39 crc kubenswrapper[4811]: I0216 20:59:39.637656 4811 generic.go:334] "Generic (PLEG): container finished" podID="0222622d-0fbe-4f15-8a2b-049a68617336" containerID="9b19ee31e4397a7be01c1b90803166ed1e8680afabf755715b316065bb1c5d18" exitCode=0 Feb 16 20:59:39 crc kubenswrapper[4811]: I0216 20:59:39.638031 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zzqvw" podUID="fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd" containerName="registry-server" containerID="cri-o://1cb1dd3069592f0df5336f8648ecb7717ac1fc860a6a7d782f1e6d34dd031638" gracePeriod=2 Feb 16 20:59:39 crc kubenswrapper[4811]: I0216 20:59:39.638640 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7p4t" event={"ID":"0222622d-0fbe-4f15-8a2b-049a68617336","Type":"ContainerDied","Data":"9b19ee31e4397a7be01c1b90803166ed1e8680afabf755715b316065bb1c5d18"} Feb 16 20:59:40 crc 
kubenswrapper[4811]: I0216 20:59:40.035714 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7p4t" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.087137 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0222622d-0fbe-4f15-8a2b-049a68617336-catalog-content\") pod \"0222622d-0fbe-4f15-8a2b-049a68617336\" (UID: \"0222622d-0fbe-4f15-8a2b-049a68617336\") " Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.087292 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0222622d-0fbe-4f15-8a2b-049a68617336-utilities\") pod \"0222622d-0fbe-4f15-8a2b-049a68617336\" (UID: \"0222622d-0fbe-4f15-8a2b-049a68617336\") " Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.087423 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcnjc\" (UniqueName: \"kubernetes.io/projected/0222622d-0fbe-4f15-8a2b-049a68617336-kube-api-access-hcnjc\") pod \"0222622d-0fbe-4f15-8a2b-049a68617336\" (UID: \"0222622d-0fbe-4f15-8a2b-049a68617336\") " Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.090019 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0222622d-0fbe-4f15-8a2b-049a68617336-utilities" (OuterVolumeSpecName: "utilities") pod "0222622d-0fbe-4f15-8a2b-049a68617336" (UID: "0222622d-0fbe-4f15-8a2b-049a68617336"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.098447 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0222622d-0fbe-4f15-8a2b-049a68617336-kube-api-access-hcnjc" (OuterVolumeSpecName: "kube-api-access-hcnjc") pod "0222622d-0fbe-4f15-8a2b-049a68617336" (UID: "0222622d-0fbe-4f15-8a2b-049a68617336"). InnerVolumeSpecName "kube-api-access-hcnjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.119780 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0222622d-0fbe-4f15-8a2b-049a68617336-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0222622d-0fbe-4f15-8a2b-049a68617336" (UID: "0222622d-0fbe-4f15-8a2b-049a68617336"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.146820 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.188817 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e9b02c1-4869-4d3f-aa19-2f5139428e1a-kube-api-access\") pod \"4e9b02c1-4869-4d3f-aa19-2f5139428e1a\" (UID: \"4e9b02c1-4869-4d3f-aa19-2f5139428e1a\") " Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.188888 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e9b02c1-4869-4d3f-aa19-2f5139428e1a-kubelet-dir\") pod \"4e9b02c1-4869-4d3f-aa19-2f5139428e1a\" (UID: \"4e9b02c1-4869-4d3f-aa19-2f5139428e1a\") " Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.189031 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e9b02c1-4869-4d3f-aa19-2f5139428e1a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4e9b02c1-4869-4d3f-aa19-2f5139428e1a" (UID: "4e9b02c1-4869-4d3f-aa19-2f5139428e1a"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.189241 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcnjc\" (UniqueName: \"kubernetes.io/projected/0222622d-0fbe-4f15-8a2b-049a68617336-kube-api-access-hcnjc\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.189257 4811 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e9b02c1-4869-4d3f-aa19-2f5139428e1a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.189293 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0222622d-0fbe-4f15-8a2b-049a68617336-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.189303 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0222622d-0fbe-4f15-8a2b-049a68617336-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.210409 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e9b02c1-4869-4d3f-aa19-2f5139428e1a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4e9b02c1-4869-4d3f-aa19-2f5139428e1a" (UID: "4e9b02c1-4869-4d3f-aa19-2f5139428e1a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.227383 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zzqvw" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.290803 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd-utilities\") pod \"fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd\" (UID: \"fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd\") " Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.291326 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd-catalog-content\") pod \"fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd\" (UID: \"fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd\") " Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.291886 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd-utilities" (OuterVolumeSpecName: "utilities") pod "fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd" (UID: "fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.292922 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsmh\" (UniqueName: \"kubernetes.io/projected/fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd-kube-api-access-dbsmh\") pod \"fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd\" (UID: \"fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd\") " Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.293826 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e9b02c1-4869-4d3f-aa19-2f5139428e1a-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.293939 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.296800 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd-kube-api-access-dbsmh" (OuterVolumeSpecName: "kube-api-access-dbsmh") pod "fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd" (UID: "fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd"). InnerVolumeSpecName "kube-api-access-dbsmh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.395305 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsmh\" (UniqueName: \"kubernetes.io/projected/fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd-kube-api-access-dbsmh\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.438359 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd" (UID: "fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.496621 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.653806 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.656650 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4e9b02c1-4869-4d3f-aa19-2f5139428e1a","Type":"ContainerDied","Data":"19eeaa4aae418fdbb8fc34d6460160478bb4b4759f342ec2082dbbf94a7b3e9f"} Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.656911 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19eeaa4aae418fdbb8fc34d6460160478bb4b4759f342ec2082dbbf94a7b3e9f" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.658326 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7p4t" event={"ID":"0222622d-0fbe-4f15-8a2b-049a68617336","Type":"ContainerDied","Data":"e45fbdd2614c47c8b80b762be5cdd9cb6eb280cae2b36c46e9131b751534f242"} Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.658533 4811 scope.go:117] "RemoveContainer" containerID="9b19ee31e4397a7be01c1b90803166ed1e8680afabf755715b316065bb1c5d18" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.658374 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7p4t" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.661124 4811 generic.go:334] "Generic (PLEG): container finished" podID="4f877237-a18d-42d1-9727-d62eb52ea19c" containerID="004ed10147b57fc249111f251071a925b2376570d63d3904463c4bf1507e2ccb" exitCode=0 Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.661354 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm9g2" event={"ID":"4f877237-a18d-42d1-9727-d62eb52ea19c","Type":"ContainerDied","Data":"004ed10147b57fc249111f251071a925b2376570d63d3904463c4bf1507e2ccb"} Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.666110 4811 generic.go:334] "Generic (PLEG): container finished" podID="fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd" containerID="1cb1dd3069592f0df5336f8648ecb7717ac1fc860a6a7d782f1e6d34dd031638" exitCode=0 Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.666393 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zzqvw" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.666227 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzqvw" event={"ID":"fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd","Type":"ContainerDied","Data":"1cb1dd3069592f0df5336f8648ecb7717ac1fc860a6a7d782f1e6d34dd031638"} Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.666694 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzqvw" event={"ID":"fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd","Type":"ContainerDied","Data":"9c37520ba6c27aa418e74d428cfcb3034009055a4e845fde19a338d55543ad21"} Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.671505 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqch" event={"ID":"764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7","Type":"ContainerStarted","Data":"645a13cfa16a206ca18eb438cc5e0e9066d881a5a7677123b31039c3d465a000"} Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.694828 4811 scope.go:117] "RemoveContainer" containerID="b6104e0c92b744974a536189ea0c6722bfdcf0e2ac1d5f831059618add867255" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.726669 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zzqvw"] Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.726712 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zzqvw"] Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.754140 4811 scope.go:117] "RemoveContainer" containerID="e75b8d21573ba94264f17a5cad2472c01ac7bf71914f1cd935c9a0dd9fadd412" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.761396 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7p4t"] Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.764899 4811 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7p4t"] Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.814251 4811 scope.go:117] "RemoveContainer" containerID="1cb1dd3069592f0df5336f8648ecb7717ac1fc860a6a7d782f1e6d34dd031638" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.852606 4811 scope.go:117] "RemoveContainer" containerID="2c5a91b1633d9f8f0b1e65f32a2595bf329aff8e6ee2761d1afcea75487bd15e" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.921803 4811 scope.go:117] "RemoveContainer" containerID="839c9faf8d91bf0e7404e1562df8c811deada819cd3e211b803e050cf7cd4dd1" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.949453 4811 scope.go:117] "RemoveContainer" containerID="1cb1dd3069592f0df5336f8648ecb7717ac1fc860a6a7d782f1e6d34dd031638" Feb 16 20:59:40 crc kubenswrapper[4811]: E0216 20:59:40.950161 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cb1dd3069592f0df5336f8648ecb7717ac1fc860a6a7d782f1e6d34dd031638\": container with ID starting with 1cb1dd3069592f0df5336f8648ecb7717ac1fc860a6a7d782f1e6d34dd031638 not found: ID does not exist" containerID="1cb1dd3069592f0df5336f8648ecb7717ac1fc860a6a7d782f1e6d34dd031638" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.950233 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cb1dd3069592f0df5336f8648ecb7717ac1fc860a6a7d782f1e6d34dd031638"} err="failed to get container status \"1cb1dd3069592f0df5336f8648ecb7717ac1fc860a6a7d782f1e6d34dd031638\": rpc error: code = NotFound desc = could not find container \"1cb1dd3069592f0df5336f8648ecb7717ac1fc860a6a7d782f1e6d34dd031638\": container with ID starting with 1cb1dd3069592f0df5336f8648ecb7717ac1fc860a6a7d782f1e6d34dd031638 not found: ID does not exist" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.950315 4811 scope.go:117] "RemoveContainer" 
containerID="2c5a91b1633d9f8f0b1e65f32a2595bf329aff8e6ee2761d1afcea75487bd15e" Feb 16 20:59:40 crc kubenswrapper[4811]: E0216 20:59:40.951242 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c5a91b1633d9f8f0b1e65f32a2595bf329aff8e6ee2761d1afcea75487bd15e\": container with ID starting with 2c5a91b1633d9f8f0b1e65f32a2595bf329aff8e6ee2761d1afcea75487bd15e not found: ID does not exist" containerID="2c5a91b1633d9f8f0b1e65f32a2595bf329aff8e6ee2761d1afcea75487bd15e" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.951298 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c5a91b1633d9f8f0b1e65f32a2595bf329aff8e6ee2761d1afcea75487bd15e"} err="failed to get container status \"2c5a91b1633d9f8f0b1e65f32a2595bf329aff8e6ee2761d1afcea75487bd15e\": rpc error: code = NotFound desc = could not find container \"2c5a91b1633d9f8f0b1e65f32a2595bf329aff8e6ee2761d1afcea75487bd15e\": container with ID starting with 2c5a91b1633d9f8f0b1e65f32a2595bf329aff8e6ee2761d1afcea75487bd15e not found: ID does not exist" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.951342 4811 scope.go:117] "RemoveContainer" containerID="839c9faf8d91bf0e7404e1562df8c811deada819cd3e211b803e050cf7cd4dd1" Feb 16 20:59:40 crc kubenswrapper[4811]: E0216 20:59:40.951744 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"839c9faf8d91bf0e7404e1562df8c811deada819cd3e211b803e050cf7cd4dd1\": container with ID starting with 839c9faf8d91bf0e7404e1562df8c811deada819cd3e211b803e050cf7cd4dd1 not found: ID does not exist" containerID="839c9faf8d91bf0e7404e1562df8c811deada819cd3e211b803e050cf7cd4dd1" Feb 16 20:59:40 crc kubenswrapper[4811]: I0216 20:59:40.951774 4811 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"839c9faf8d91bf0e7404e1562df8c811deada819cd3e211b803e050cf7cd4dd1"} err="failed to get container status \"839c9faf8d91bf0e7404e1562df8c811deada819cd3e211b803e050cf7cd4dd1\": rpc error: code = NotFound desc = could not find container \"839c9faf8d91bf0e7404e1562df8c811deada819cd3e211b803e050cf7cd4dd1\": container with ID starting with 839c9faf8d91bf0e7404e1562df8c811deada819cd3e211b803e050cf7cd4dd1 not found: ID does not exist" Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 20:59:41.686757 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz6d7" event={"ID":"08f82c33-6a50-480c-b780-e95a09a3e064","Type":"ContainerStarted","Data":"9bec512905f2499ffe923aba2f2909de5d87db8b9770dc9a2426c2d2136260b7"} Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 20:59:41.692651 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm9g2" event={"ID":"4f877237-a18d-42d1-9727-d62eb52ea19c","Type":"ContainerStarted","Data":"fae7944547d632e293c447e9a45e08bbd2eb056223d2c049498333344765313b"} Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 20:59:41.695276 4811 generic.go:334] "Generic (PLEG): container finished" podID="14b78b5a-3cbf-4b80-8831-8f522bf2a2e5" containerID="e12469ee3501da4861b4a03b831fff9d12aac1091b232355e1fc6be51f0524cb" exitCode=0 Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 20:59:41.695326 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cspmf" event={"ID":"14b78b5a-3cbf-4b80-8831-8f522bf2a2e5","Type":"ContainerDied","Data":"e12469ee3501da4861b4a03b831fff9d12aac1091b232355e1fc6be51f0524cb"} Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 20:59:41.699059 4811 generic.go:334] "Generic (PLEG): container finished" podID="aabb6f4a-05fd-4f4f-9211-81884fdd4bb1" containerID="1de0bef2b8b35d779904c1206a0a58d6bcec3115a59425668263efbe11158eaa" exitCode=0 Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 
20:59:41.699138 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtdt8" event={"ID":"aabb6f4a-05fd-4f4f-9211-81884fdd4bb1","Type":"ContainerDied","Data":"1de0bef2b8b35d779904c1206a0a58d6bcec3115a59425668263efbe11158eaa"} Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 20:59:41.711681 4811 generic.go:334] "Generic (PLEG): container finished" podID="764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7" containerID="645a13cfa16a206ca18eb438cc5e0e9066d881a5a7677123b31039c3d465a000" exitCode=0 Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 20:59:41.711729 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqch" event={"ID":"764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7","Type":"ContainerDied","Data":"645a13cfa16a206ca18eb438cc5e0e9066d881a5a7677123b31039c3d465a000"} Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 20:59:41.721483 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 20:59:41 crc kubenswrapper[4811]: E0216 20:59:41.721797 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd" containerName="extract-content" Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 20:59:41.721822 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd" containerName="extract-content" Feb 16 20:59:41 crc kubenswrapper[4811]: E0216 20:59:41.721837 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd" containerName="extract-utilities" Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 20:59:41.721848 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd" containerName="extract-utilities" Feb 16 20:59:41 crc kubenswrapper[4811]: E0216 20:59:41.721873 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0222622d-0fbe-4f15-8a2b-049a68617336" 
containerName="extract-utilities" Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 20:59:41.721883 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0222622d-0fbe-4f15-8a2b-049a68617336" containerName="extract-utilities" Feb 16 20:59:41 crc kubenswrapper[4811]: E0216 20:59:41.721894 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0222622d-0fbe-4f15-8a2b-049a68617336" containerName="extract-content" Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 20:59:41.721901 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0222622d-0fbe-4f15-8a2b-049a68617336" containerName="extract-content" Feb 16 20:59:41 crc kubenswrapper[4811]: E0216 20:59:41.721911 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd" containerName="registry-server" Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 20:59:41.721918 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd" containerName="registry-server" Feb 16 20:59:41 crc kubenswrapper[4811]: E0216 20:59:41.721928 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0222622d-0fbe-4f15-8a2b-049a68617336" containerName="registry-server" Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 20:59:41.721935 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0222622d-0fbe-4f15-8a2b-049a68617336" containerName="registry-server" Feb 16 20:59:41 crc kubenswrapper[4811]: E0216 20:59:41.721946 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e9b02c1-4869-4d3f-aa19-2f5139428e1a" containerName="pruner" Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 20:59:41.721952 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e9b02c1-4869-4d3f-aa19-2f5139428e1a" containerName="pruner" Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 20:59:41.722059 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="0222622d-0fbe-4f15-8a2b-049a68617336" 
containerName="registry-server" Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 20:59:41.722071 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd" containerName="registry-server" Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 20:59:41.722087 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e9b02c1-4869-4d3f-aa19-2f5139428e1a" containerName="pruner" Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 20:59:41.722568 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 20:59:41.725272 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 20:59:41.726935 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 20:59:41.746450 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mm9g2" podStartSLOduration=3.368921718 podStartE2EDuration="48.746417094s" podCreationTimestamp="2026-02-16 20:58:53 +0000 UTC" firstStartedPulling="2026-02-16 20:58:55.8197238 +0000 UTC m=+153.749019738" lastFinishedPulling="2026-02-16 20:59:41.197219136 +0000 UTC m=+199.126515114" observedRunningTime="2026-02-16 20:59:41.730483142 +0000 UTC m=+199.659779090" watchObservedRunningTime="2026-02-16 20:59:41.746417094 +0000 UTC m=+199.675713062" Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 20:59:41.802572 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 20:59:41.917813 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/1a78bc8b-a89b-4473-b54b-d0f31ab9ef89-var-lock\") pod \"installer-9-crc\" (UID: \"1a78bc8b-a89b-4473-b54b-d0f31ab9ef89\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 20:59:41.917950 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a78bc8b-a89b-4473-b54b-d0f31ab9ef89-kubelet-dir\") pod \"installer-9-crc\" (UID: \"1a78bc8b-a89b-4473-b54b-d0f31ab9ef89\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 20:59:41 crc kubenswrapper[4811]: I0216 20:59:41.918008 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a78bc8b-a89b-4473-b54b-d0f31ab9ef89-kube-api-access\") pod \"installer-9-crc\" (UID: \"1a78bc8b-a89b-4473-b54b-d0f31ab9ef89\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 20:59:42 crc kubenswrapper[4811]: I0216 20:59:42.019397 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a78bc8b-a89b-4473-b54b-d0f31ab9ef89-kube-api-access\") pod \"installer-9-crc\" (UID: \"1a78bc8b-a89b-4473-b54b-d0f31ab9ef89\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 20:59:42 crc kubenswrapper[4811]: I0216 20:59:42.019504 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1a78bc8b-a89b-4473-b54b-d0f31ab9ef89-var-lock\") pod \"installer-9-crc\" (UID: \"1a78bc8b-a89b-4473-b54b-d0f31ab9ef89\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 20:59:42 crc kubenswrapper[4811]: I0216 20:59:42.019547 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a78bc8b-a89b-4473-b54b-d0f31ab9ef89-kubelet-dir\") pod 
\"installer-9-crc\" (UID: \"1a78bc8b-a89b-4473-b54b-d0f31ab9ef89\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 20:59:42 crc kubenswrapper[4811]: I0216 20:59:42.019661 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a78bc8b-a89b-4473-b54b-d0f31ab9ef89-kubelet-dir\") pod \"installer-9-crc\" (UID: \"1a78bc8b-a89b-4473-b54b-d0f31ab9ef89\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 20:59:42 crc kubenswrapper[4811]: I0216 20:59:42.019701 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1a78bc8b-a89b-4473-b54b-d0f31ab9ef89-var-lock\") pod \"installer-9-crc\" (UID: \"1a78bc8b-a89b-4473-b54b-d0f31ab9ef89\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 20:59:42 crc kubenswrapper[4811]: I0216 20:59:42.036754 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a78bc8b-a89b-4473-b54b-d0f31ab9ef89-kube-api-access\") pod \"installer-9-crc\" (UID: \"1a78bc8b-a89b-4473-b54b-d0f31ab9ef89\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 20:59:42 crc kubenswrapper[4811]: I0216 20:59:42.066893 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 20:59:42 crc kubenswrapper[4811]: I0216 20:59:42.278216 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 20:59:42 crc kubenswrapper[4811]: W0216 20:59:42.294808 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1a78bc8b_a89b_4473_b54b_d0f31ab9ef89.slice/crio-516a99aa9c978559103a3701611ffab29f65c9166aa74e618d0be49b611386ec WatchSource:0}: Error finding container 516a99aa9c978559103a3701611ffab29f65c9166aa74e618d0be49b611386ec: Status 404 returned error can't find the container with id 516a99aa9c978559103a3701611ffab29f65c9166aa74e618d0be49b611386ec Feb 16 20:59:42 crc kubenswrapper[4811]: I0216 20:59:42.723500 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0222622d-0fbe-4f15-8a2b-049a68617336" path="/var/lib/kubelet/pods/0222622d-0fbe-4f15-8a2b-049a68617336/volumes" Feb 16 20:59:42 crc kubenswrapper[4811]: I0216 20:59:42.732416 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd" path="/var/lib/kubelet/pods/fa5c03e5-f6e2-4bef-ae66-e85fdb1b54dd/volumes" Feb 16 20:59:42 crc kubenswrapper[4811]: I0216 20:59:42.742112 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"1a78bc8b-a89b-4473-b54b-d0f31ab9ef89","Type":"ContainerStarted","Data":"f0e2491c65691cc4d7eed3544381bb9242e0b5fe408400627c6506bfc34042cb"} Feb 16 20:59:42 crc kubenswrapper[4811]: I0216 20:59:42.742169 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"1a78bc8b-a89b-4473-b54b-d0f31ab9ef89","Type":"ContainerStarted","Data":"516a99aa9c978559103a3701611ffab29f65c9166aa74e618d0be49b611386ec"} Feb 16 20:59:42 crc kubenswrapper[4811]: I0216 20:59:42.746103 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-jtdt8" event={"ID":"aabb6f4a-05fd-4f4f-9211-81884fdd4bb1","Type":"ContainerStarted","Data":"49c6017b46f3b85465f2b07a9845fd3e90ff35bd33cca1068eeb82185601f095"} Feb 16 20:59:42 crc kubenswrapper[4811]: I0216 20:59:42.748046 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqch" event={"ID":"764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7","Type":"ContainerStarted","Data":"f6d381b9dc856e6f628555646fc069a4f2b7017be1f30be07b4c2b61a0d2e419"} Feb 16 20:59:42 crc kubenswrapper[4811]: I0216 20:59:42.749842 4811 generic.go:334] "Generic (PLEG): container finished" podID="08f82c33-6a50-480c-b780-e95a09a3e064" containerID="9bec512905f2499ffe923aba2f2909de5d87db8b9770dc9a2426c2d2136260b7" exitCode=0 Feb 16 20:59:42 crc kubenswrapper[4811]: I0216 20:59:42.749892 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz6d7" event={"ID":"08f82c33-6a50-480c-b780-e95a09a3e064","Type":"ContainerDied","Data":"9bec512905f2499ffe923aba2f2909de5d87db8b9770dc9a2426c2d2136260b7"} Feb 16 20:59:42 crc kubenswrapper[4811]: I0216 20:59:42.753032 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cspmf" event={"ID":"14b78b5a-3cbf-4b80-8831-8f522bf2a2e5","Type":"ContainerStarted","Data":"1715b47543948f2c9fecbdf94e1ae8ad0f3a407e84380ad1bab420db47d362f2"} Feb 16 20:59:42 crc kubenswrapper[4811]: I0216 20:59:42.759983 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.759962939 podStartE2EDuration="1.759962939s" podCreationTimestamp="2026-02-16 20:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:59:42.758175634 +0000 UTC m=+200.687471592" watchObservedRunningTime="2026-02-16 20:59:42.759962939 +0000 UTC m=+200.689258877" 
Feb 16 20:59:42 crc kubenswrapper[4811]: I0216 20:59:42.783696 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jtdt8" podStartSLOduration=2.360451072 podStartE2EDuration="49.783663736s" podCreationTimestamp="2026-02-16 20:58:53 +0000 UTC" firstStartedPulling="2026-02-16 20:58:54.77060982 +0000 UTC m=+152.699905758" lastFinishedPulling="2026-02-16 20:59:42.193822484 +0000 UTC m=+200.123118422" observedRunningTime="2026-02-16 20:59:42.780716042 +0000 UTC m=+200.710011990" watchObservedRunningTime="2026-02-16 20:59:42.783663736 +0000 UTC m=+200.712959664" Feb 16 20:59:42 crc kubenswrapper[4811]: I0216 20:59:42.799717 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cspmf" podStartSLOduration=2.337272588 podStartE2EDuration="49.79969401s" podCreationTimestamp="2026-02-16 20:58:53 +0000 UTC" firstStartedPulling="2026-02-16 20:58:54.76821064 +0000 UTC m=+152.697506578" lastFinishedPulling="2026-02-16 20:59:42.230632062 +0000 UTC m=+200.159928000" observedRunningTime="2026-02-16 20:59:42.798815308 +0000 UTC m=+200.728111246" watchObservedRunningTime="2026-02-16 20:59:42.79969401 +0000 UTC m=+200.728989948" Feb 16 20:59:42 crc kubenswrapper[4811]: I0216 20:59:42.820239 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dcqch" podStartSLOduration=2.6305523170000003 podStartE2EDuration="46.820217088s" podCreationTimestamp="2026-02-16 20:58:56 +0000 UTC" firstStartedPulling="2026-02-16 20:58:58.053877007 +0000 UTC m=+155.983172945" lastFinishedPulling="2026-02-16 20:59:42.243541778 +0000 UTC m=+200.172837716" observedRunningTime="2026-02-16 20:59:42.819849699 +0000 UTC m=+200.749145647" watchObservedRunningTime="2026-02-16 20:59:42.820217088 +0000 UTC m=+200.749513026" Feb 16 20:59:43 crc kubenswrapper[4811]: I0216 20:59:43.761843 4811 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jtdt8" Feb 16 20:59:43 crc kubenswrapper[4811]: I0216 20:59:43.762289 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jtdt8" Feb 16 20:59:43 crc kubenswrapper[4811]: I0216 20:59:43.762930 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz6d7" event={"ID":"08f82c33-6a50-480c-b780-e95a09a3e064","Type":"ContainerStarted","Data":"9921b8d100bce93c518be62769dbd5dff48adef7b9f43c76778a56d4aa3409db"} Feb 16 20:59:43 crc kubenswrapper[4811]: I0216 20:59:43.792247 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bz6d7" podStartSLOduration=2.397778163 podStartE2EDuration="50.792223496s" podCreationTimestamp="2026-02-16 20:58:53 +0000 UTC" firstStartedPulling="2026-02-16 20:58:54.742021779 +0000 UTC m=+152.671317717" lastFinishedPulling="2026-02-16 20:59:43.136467102 +0000 UTC m=+201.065763050" observedRunningTime="2026-02-16 20:59:43.789508747 +0000 UTC m=+201.718804705" watchObservedRunningTime="2026-02-16 20:59:43.792223496 +0000 UTC m=+201.721519444" Feb 16 20:59:43 crc kubenswrapper[4811]: I0216 20:59:43.948401 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cspmf" Feb 16 20:59:43 crc kubenswrapper[4811]: I0216 20:59:43.948490 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cspmf" Feb 16 20:59:44 crc kubenswrapper[4811]: I0216 20:59:44.223430 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mm9g2" Feb 16 20:59:44 crc kubenswrapper[4811]: I0216 20:59:44.223617 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mm9g2" Feb 16 
20:59:44 crc kubenswrapper[4811]: I0216 20:59:44.276055 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mm9g2" Feb 16 20:59:44 crc kubenswrapper[4811]: I0216 20:59:44.805824 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-jtdt8" podUID="aabb6f4a-05fd-4f4f-9211-81884fdd4bb1" containerName="registry-server" probeResult="failure" output=< Feb 16 20:59:44 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Feb 16 20:59:44 crc kubenswrapper[4811]: > Feb 16 20:59:44 crc kubenswrapper[4811]: I0216 20:59:44.999390 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-cspmf" podUID="14b78b5a-3cbf-4b80-8831-8f522bf2a2e5" containerName="registry-server" probeResult="failure" output=< Feb 16 20:59:44 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Feb 16 20:59:44 crc kubenswrapper[4811]: > Feb 16 20:59:46 crc kubenswrapper[4811]: I0216 20:59:46.816713 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dcqch" Feb 16 20:59:46 crc kubenswrapper[4811]: I0216 20:59:46.816812 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dcqch" Feb 16 20:59:47 crc kubenswrapper[4811]: I0216 20:59:47.496392 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" podUID="7ff60cdb-3618-4902-a679-e5bda29c5c60" containerName="oauth-openshift" containerID="cri-o://e644a6a1dbfddeb229d12302f02b8cc0427e7431df471d0aeccd1d9e7939e4f6" gracePeriod=15 Feb 16 20:59:47 crc kubenswrapper[4811]: I0216 20:59:47.795091 4811 generic.go:334] "Generic (PLEG): container finished" podID="7ff60cdb-3618-4902-a679-e5bda29c5c60" 
containerID="e644a6a1dbfddeb229d12302f02b8cc0427e7431df471d0aeccd1d9e7939e4f6" exitCode=0 Feb 16 20:59:47 crc kubenswrapper[4811]: I0216 20:59:47.795245 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" event={"ID":"7ff60cdb-3618-4902-a679-e5bda29c5c60","Type":"ContainerDied","Data":"e644a6a1dbfddeb229d12302f02b8cc0427e7431df471d0aeccd1d9e7939e4f6"} Feb 16 20:59:47 crc kubenswrapper[4811]: I0216 20:59:47.867559 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dcqch" podUID="764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7" containerName="registry-server" probeResult="failure" output=< Feb 16 20:59:47 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Feb 16 20:59:47 crc kubenswrapper[4811]: > Feb 16 20:59:47 crc kubenswrapper[4811]: I0216 20:59:47.983146 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.015563 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-96d6999f9-6fx2m"] Feb 16 20:59:48 crc kubenswrapper[4811]: E0216 20:59:48.015844 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff60cdb-3618-4902-a679-e5bda29c5c60" containerName="oauth-openshift" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.015859 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff60cdb-3618-4902-a679-e5bda29c5c60" containerName="oauth-openshift" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.015989 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ff60cdb-3618-4902-a679-e5bda29c5c60" containerName="oauth-openshift" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.016448 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.032713 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-96d6999f9-6fx2m"] Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.116013 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-user-template-provider-selection\") pod \"7ff60cdb-3618-4902-a679-e5bda29c5c60\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.116437 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5qh8\" (UniqueName: \"kubernetes.io/projected/7ff60cdb-3618-4902-a679-e5bda29c5c60-kube-api-access-h5qh8\") pod \"7ff60cdb-3618-4902-a679-e5bda29c5c60\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.117693 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-serving-cert\") pod \"7ff60cdb-3618-4902-a679-e5bda29c5c60\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.117785 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-service-ca\") pod \"7ff60cdb-3618-4902-a679-e5bda29c5c60\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.117832 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-user-idp-0-file-data\") pod \"7ff60cdb-3618-4902-a679-e5bda29c5c60\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.117864 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-user-template-error\") pod \"7ff60cdb-3618-4902-a679-e5bda29c5c60\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.117902 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7ff60cdb-3618-4902-a679-e5bda29c5c60-audit-policies\") pod \"7ff60cdb-3618-4902-a679-e5bda29c5c60\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.117953 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-router-certs\") pod \"7ff60cdb-3618-4902-a679-e5bda29c5c60\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.117982 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-session\") pod \"7ff60cdb-3618-4902-a679-e5bda29c5c60\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.118027 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-ocp-branding-template\") pod \"7ff60cdb-3618-4902-a679-e5bda29c5c60\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.118054 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-cliconfig\") pod \"7ff60cdb-3618-4902-a679-e5bda29c5c60\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.118113 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-trusted-ca-bundle\") pod \"7ff60cdb-3618-4902-a679-e5bda29c5c60\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.118153 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-user-template-login\") pod \"7ff60cdb-3618-4902-a679-e5bda29c5c60\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.118183 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7ff60cdb-3618-4902-a679-e5bda29c5c60-audit-dir\") pod \"7ff60cdb-3618-4902-a679-e5bda29c5c60\" (UID: \"7ff60cdb-3618-4902-a679-e5bda29c5c60\") " Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.118433 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.118501 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-user-template-error\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.118540 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-system-session\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.118571 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cbfc11de-fd62-40ad-ab48-faa3032e48b0-audit-policies\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.118610 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " 
pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.118651 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cbfc11de-fd62-40ad-ab48-faa3032e48b0-audit-dir\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.118673 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.118716 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.118755 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.118810 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.118862 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-user-template-login\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.118901 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-system-router-certs\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.118935 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtmsx\" (UniqueName: \"kubernetes.io/projected/cbfc11de-fd62-40ad-ab48-faa3032e48b0-kube-api-access-qtmsx\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.119037 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-system-service-ca\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.119508 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ff60cdb-3618-4902-a679-e5bda29c5c60-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "7ff60cdb-3618-4902-a679-e5bda29c5c60" (UID: "7ff60cdb-3618-4902-a679-e5bda29c5c60"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.120492 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "7ff60cdb-3618-4902-a679-e5bda29c5c60" (UID: "7ff60cdb-3618-4902-a679-e5bda29c5c60"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.121067 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "7ff60cdb-3618-4902-a679-e5bda29c5c60" (UID: "7ff60cdb-3618-4902-a679-e5bda29c5c60"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.121477 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ff60cdb-3618-4902-a679-e5bda29c5c60-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "7ff60cdb-3618-4902-a679-e5bda29c5c60" (UID: "7ff60cdb-3618-4902-a679-e5bda29c5c60"). 
InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.125717 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "7ff60cdb-3618-4902-a679-e5bda29c5c60" (UID: "7ff60cdb-3618-4902-a679-e5bda29c5c60"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.126723 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "7ff60cdb-3618-4902-a679-e5bda29c5c60" (UID: "7ff60cdb-3618-4902-a679-e5bda29c5c60"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.126849 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ff60cdb-3618-4902-a679-e5bda29c5c60-kube-api-access-h5qh8" (OuterVolumeSpecName: "kube-api-access-h5qh8") pod "7ff60cdb-3618-4902-a679-e5bda29c5c60" (UID: "7ff60cdb-3618-4902-a679-e5bda29c5c60"). InnerVolumeSpecName "kube-api-access-h5qh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.127392 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "7ff60cdb-3618-4902-a679-e5bda29c5c60" (UID: "7ff60cdb-3618-4902-a679-e5bda29c5c60"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.127689 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "7ff60cdb-3618-4902-a679-e5bda29c5c60" (UID: "7ff60cdb-3618-4902-a679-e5bda29c5c60"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.128405 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "7ff60cdb-3618-4902-a679-e5bda29c5c60" (UID: "7ff60cdb-3618-4902-a679-e5bda29c5c60"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.132463 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "7ff60cdb-3618-4902-a679-e5bda29c5c60" (UID: "7ff60cdb-3618-4902-a679-e5bda29c5c60"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.133043 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "7ff60cdb-3618-4902-a679-e5bda29c5c60" (UID: "7ff60cdb-3618-4902-a679-e5bda29c5c60"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.133248 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "7ff60cdb-3618-4902-a679-e5bda29c5c60" (UID: "7ff60cdb-3618-4902-a679-e5bda29c5c60"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.133638 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "7ff60cdb-3618-4902-a679-e5bda29c5c60" (UID: "7ff60cdb-3618-4902-a679-e5bda29c5c60"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.219697 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-system-service-ca\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.219759 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.219794 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-user-template-error\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.219823 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-system-session\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.219848 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/cbfc11de-fd62-40ad-ab48-faa3032e48b0-audit-policies\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.219876 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.219899 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cbfc11de-fd62-40ad-ab48-faa3032e48b0-audit-dir\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.219920 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.219952 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 
20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.219981 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.220024 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.220054 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-user-template-login\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.220077 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-system-router-certs\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.220105 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtmsx\" (UniqueName: 
\"kubernetes.io/projected/cbfc11de-fd62-40ad-ab48-faa3032e48b0-kube-api-access-qtmsx\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.220164 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.220181 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5qh8\" (UniqueName: \"kubernetes.io/projected/7ff60cdb-3618-4902-a679-e5bda29c5c60-kube-api-access-h5qh8\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.220285 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.220303 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.220323 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.220336 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.220348 4811 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7ff60cdb-3618-4902-a679-e5bda29c5c60-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.220361 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.220375 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.220388 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.220401 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.220413 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.220426 4811 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7ff60cdb-3618-4902-a679-e5bda29c5c60-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.220439 4811 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7ff60cdb-3618-4902-a679-e5bda29c5c60-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.221226 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cbfc11de-fd62-40ad-ab48-faa3032e48b0-audit-dir\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.223453 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cbfc11de-fd62-40ad-ab48-faa3032e48b0-audit-policies\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.223819 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-system-service-ca\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.224541 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-user-idp-0-file-data\") 
pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.224501 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-system-session\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.225476 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.230113 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.232126 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-system-router-certs\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.232738 4811 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.233112 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.233185 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-user-template-error\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.234566 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-user-template-login\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.235395 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cbfc11de-fd62-40ad-ab48-faa3032e48b0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: 
\"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.245009 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtmsx\" (UniqueName: \"kubernetes.io/projected/cbfc11de-fd62-40ad-ab48-faa3032e48b0-kube-api-access-qtmsx\") pod \"oauth-openshift-96d6999f9-6fx2m\" (UID: \"cbfc11de-fd62-40ad-ab48-faa3032e48b0\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.364142 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.364311 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.364382 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.365258 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba"} pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.365345 4811 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" containerID="cri-o://13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba" gracePeriod=600 Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.368958 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.618513 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-96d6999f9-6fx2m"] Feb 16 20:59:48 crc kubenswrapper[4811]: W0216 20:59:48.631537 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbfc11de_fd62_40ad_ab48_faa3032e48b0.slice/crio-3a6b41757464b5d442aceccc07fac17d7028eb0762ea28ebe85d7650e88f8019 WatchSource:0}: Error finding container 3a6b41757464b5d442aceccc07fac17d7028eb0762ea28ebe85d7650e88f8019: Status 404 returned error can't find the container with id 3a6b41757464b5d442aceccc07fac17d7028eb0762ea28ebe85d7650e88f8019 Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.812139 4811 generic.go:334] "Generic (PLEG): container finished" podID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerID="13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba" exitCode=0 Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.812247 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerDied","Data":"13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba"} Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.812309 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerStarted","Data":"511f95f6a6799c704fdd7e32c1371b422a6e981f14147fd4c29d440cdf6c2331"} Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.819535 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.819587 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4f8kg" event={"ID":"7ff60cdb-3618-4902-a679-e5bda29c5c60","Type":"ContainerDied","Data":"62928bbecb86564d117b12f09538af4aad8164c5be93b1a33e7d4271a1d27eee"} Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.819688 4811 scope.go:117] "RemoveContainer" containerID="e644a6a1dbfddeb229d12302f02b8cc0427e7431df471d0aeccd1d9e7939e4f6" Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.821448 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" event={"ID":"cbfc11de-fd62-40ad-ab48-faa3032e48b0","Type":"ContainerStarted","Data":"3a6b41757464b5d442aceccc07fac17d7028eb0762ea28ebe85d7650e88f8019"} Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.854640 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4f8kg"] Feb 16 20:59:48 crc kubenswrapper[4811]: I0216 20:59:48.859791 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4f8kg"] Feb 16 20:59:49 crc kubenswrapper[4811]: I0216 20:59:49.836162 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" event={"ID":"cbfc11de-fd62-40ad-ab48-faa3032e48b0","Type":"ContainerStarted","Data":"350ae20e6a68112ac54f1e1255a5122276c6cd2c4fddea79f755f18eae4bcef0"} Feb 16 20:59:49 crc 
kubenswrapper[4811]: I0216 20:59:49.836844 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:49 crc kubenswrapper[4811]: I0216 20:59:49.847810 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" Feb 16 20:59:49 crc kubenswrapper[4811]: I0216 20:59:49.873375 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-96d6999f9-6fx2m" podStartSLOduration=27.87333914 podStartE2EDuration="27.87333914s" podCreationTimestamp="2026-02-16 20:59:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:59:49.871153405 +0000 UTC m=+207.800449373" watchObservedRunningTime="2026-02-16 20:59:49.87333914 +0000 UTC m=+207.802635118" Feb 16 20:59:50 crc kubenswrapper[4811]: I0216 20:59:50.711226 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ff60cdb-3618-4902-a679-e5bda29c5c60" path="/var/lib/kubelet/pods/7ff60cdb-3618-4902-a679-e5bda29c5c60/volumes" Feb 16 20:59:53 crc kubenswrapper[4811]: I0216 20:59:53.547958 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bz6d7" Feb 16 20:59:53 crc kubenswrapper[4811]: I0216 20:59:53.548623 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bz6d7" Feb 16 20:59:53 crc kubenswrapper[4811]: I0216 20:59:53.621148 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bz6d7" Feb 16 20:59:53 crc kubenswrapper[4811]: I0216 20:59:53.822514 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jtdt8" Feb 16 20:59:53 crc 
kubenswrapper[4811]: I0216 20:59:53.880084 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jtdt8" Feb 16 20:59:53 crc kubenswrapper[4811]: I0216 20:59:53.922948 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bz6d7" Feb 16 20:59:54 crc kubenswrapper[4811]: I0216 20:59:54.011273 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cspmf" Feb 16 20:59:54 crc kubenswrapper[4811]: I0216 20:59:54.056489 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cspmf" Feb 16 20:59:54 crc kubenswrapper[4811]: I0216 20:59:54.294405 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mm9g2" Feb 16 20:59:55 crc kubenswrapper[4811]: I0216 20:59:55.731119 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mm9g2"] Feb 16 20:59:55 crc kubenswrapper[4811]: I0216 20:59:55.732112 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mm9g2" podUID="4f877237-a18d-42d1-9727-d62eb52ea19c" containerName="registry-server" containerID="cri-o://fae7944547d632e293c447e9a45e08bbd2eb056223d2c049498333344765313b" gracePeriod=2 Feb 16 20:59:55 crc kubenswrapper[4811]: I0216 20:59:55.883476 4811 generic.go:334] "Generic (PLEG): container finished" podID="4f877237-a18d-42d1-9727-d62eb52ea19c" containerID="fae7944547d632e293c447e9a45e08bbd2eb056223d2c049498333344765313b" exitCode=0 Feb 16 20:59:55 crc kubenswrapper[4811]: I0216 20:59:55.883538 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm9g2" 
event={"ID":"4f877237-a18d-42d1-9727-d62eb52ea19c","Type":"ContainerDied","Data":"fae7944547d632e293c447e9a45e08bbd2eb056223d2c049498333344765313b"} Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.208244 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mm9g2" Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.241563 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2dq6\" (UniqueName: \"kubernetes.io/projected/4f877237-a18d-42d1-9727-d62eb52ea19c-kube-api-access-h2dq6\") pod \"4f877237-a18d-42d1-9727-d62eb52ea19c\" (UID: \"4f877237-a18d-42d1-9727-d62eb52ea19c\") " Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.241868 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f877237-a18d-42d1-9727-d62eb52ea19c-utilities\") pod \"4f877237-a18d-42d1-9727-d62eb52ea19c\" (UID: \"4f877237-a18d-42d1-9727-d62eb52ea19c\") " Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.241969 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f877237-a18d-42d1-9727-d62eb52ea19c-catalog-content\") pod \"4f877237-a18d-42d1-9727-d62eb52ea19c\" (UID: \"4f877237-a18d-42d1-9727-d62eb52ea19c\") " Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.246993 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f877237-a18d-42d1-9727-d62eb52ea19c-utilities" (OuterVolumeSpecName: "utilities") pod "4f877237-a18d-42d1-9727-d62eb52ea19c" (UID: "4f877237-a18d-42d1-9727-d62eb52ea19c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.255630 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f877237-a18d-42d1-9727-d62eb52ea19c-kube-api-access-h2dq6" (OuterVolumeSpecName: "kube-api-access-h2dq6") pod "4f877237-a18d-42d1-9727-d62eb52ea19c" (UID: "4f877237-a18d-42d1-9727-d62eb52ea19c"). InnerVolumeSpecName "kube-api-access-h2dq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.318328 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f877237-a18d-42d1-9727-d62eb52ea19c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f877237-a18d-42d1-9727-d62eb52ea19c" (UID: "4f877237-a18d-42d1-9727-d62eb52ea19c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.329721 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cspmf"] Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.330046 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cspmf" podUID="14b78b5a-3cbf-4b80-8831-8f522bf2a2e5" containerName="registry-server" containerID="cri-o://1715b47543948f2c9fecbdf94e1ae8ad0f3a407e84380ad1bab420db47d362f2" gracePeriod=2 Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.345303 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2dq6\" (UniqueName: \"kubernetes.io/projected/4f877237-a18d-42d1-9727-d62eb52ea19c-kube-api-access-h2dq6\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.345361 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4f877237-a18d-42d1-9727-d62eb52ea19c-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.345375 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f877237-a18d-42d1-9727-d62eb52ea19c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.659535 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cspmf" Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.851556 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14b78b5a-3cbf-4b80-8831-8f522bf2a2e5-utilities\") pod \"14b78b5a-3cbf-4b80-8831-8f522bf2a2e5\" (UID: \"14b78b5a-3cbf-4b80-8831-8f522bf2a2e5\") " Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.851767 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5lzl\" (UniqueName: \"kubernetes.io/projected/14b78b5a-3cbf-4b80-8831-8f522bf2a2e5-kube-api-access-p5lzl\") pod \"14b78b5a-3cbf-4b80-8831-8f522bf2a2e5\" (UID: \"14b78b5a-3cbf-4b80-8831-8f522bf2a2e5\") " Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.851823 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14b78b5a-3cbf-4b80-8831-8f522bf2a2e5-catalog-content\") pod \"14b78b5a-3cbf-4b80-8831-8f522bf2a2e5\" (UID: \"14b78b5a-3cbf-4b80-8831-8f522bf2a2e5\") " Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.852355 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14b78b5a-3cbf-4b80-8831-8f522bf2a2e5-utilities" (OuterVolumeSpecName: "utilities") pod "14b78b5a-3cbf-4b80-8831-8f522bf2a2e5" (UID: "14b78b5a-3cbf-4b80-8831-8f522bf2a2e5"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.855146 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14b78b5a-3cbf-4b80-8831-8f522bf2a2e5-kube-api-access-p5lzl" (OuterVolumeSpecName: "kube-api-access-p5lzl") pod "14b78b5a-3cbf-4b80-8831-8f522bf2a2e5" (UID: "14b78b5a-3cbf-4b80-8831-8f522bf2a2e5"). InnerVolumeSpecName "kube-api-access-p5lzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.869011 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dcqch" Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.899720 4811 generic.go:334] "Generic (PLEG): container finished" podID="14b78b5a-3cbf-4b80-8831-8f522bf2a2e5" containerID="1715b47543948f2c9fecbdf94e1ae8ad0f3a407e84380ad1bab420db47d362f2" exitCode=0 Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.899807 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cspmf" event={"ID":"14b78b5a-3cbf-4b80-8831-8f522bf2a2e5","Type":"ContainerDied","Data":"1715b47543948f2c9fecbdf94e1ae8ad0f3a407e84380ad1bab420db47d362f2"} Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.899847 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cspmf" Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.899888 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cspmf" event={"ID":"14b78b5a-3cbf-4b80-8831-8f522bf2a2e5","Type":"ContainerDied","Data":"fb44bd4f132507c7671248487dacbc2bf59b357b07f10e3c88b15bc0162eb367"} Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.899918 4811 scope.go:117] "RemoveContainer" containerID="1715b47543948f2c9fecbdf94e1ae8ad0f3a407e84380ad1bab420db47d362f2" Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.906718 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm9g2" event={"ID":"4f877237-a18d-42d1-9727-d62eb52ea19c","Type":"ContainerDied","Data":"9cce0de1aab578aca5c6d2c48f05f4c5d401d911417c6b26aa228814c985b23b"} Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.906874 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mm9g2" Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.909581 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14b78b5a-3cbf-4b80-8831-8f522bf2a2e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14b78b5a-3cbf-4b80-8831-8f522bf2a2e5" (UID: "14b78b5a-3cbf-4b80-8831-8f522bf2a2e5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.928625 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dcqch" Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.931355 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mm9g2"] Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.936932 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mm9g2"] Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.938720 4811 scope.go:117] "RemoveContainer" containerID="e12469ee3501da4861b4a03b831fff9d12aac1091b232355e1fc6be51f0524cb" Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.955084 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5lzl\" (UniqueName: \"kubernetes.io/projected/14b78b5a-3cbf-4b80-8831-8f522bf2a2e5-kube-api-access-p5lzl\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.955151 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14b78b5a-3cbf-4b80-8831-8f522bf2a2e5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.955165 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14b78b5a-3cbf-4b80-8831-8f522bf2a2e5-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:56 crc kubenswrapper[4811]: I0216 20:59:56.973531 4811 scope.go:117] "RemoveContainer" containerID="2d73531fff2b16ff1a74d2ff2c9cba3411cc677c837f4a31672f72518b049bb4" Feb 16 20:59:57 crc kubenswrapper[4811]: I0216 20:59:57.005620 4811 scope.go:117] "RemoveContainer" containerID="1715b47543948f2c9fecbdf94e1ae8ad0f3a407e84380ad1bab420db47d362f2" Feb 16 20:59:57 crc kubenswrapper[4811]: E0216 20:59:57.006106 
4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1715b47543948f2c9fecbdf94e1ae8ad0f3a407e84380ad1bab420db47d362f2\": container with ID starting with 1715b47543948f2c9fecbdf94e1ae8ad0f3a407e84380ad1bab420db47d362f2 not found: ID does not exist" containerID="1715b47543948f2c9fecbdf94e1ae8ad0f3a407e84380ad1bab420db47d362f2" Feb 16 20:59:57 crc kubenswrapper[4811]: I0216 20:59:57.006163 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1715b47543948f2c9fecbdf94e1ae8ad0f3a407e84380ad1bab420db47d362f2"} err="failed to get container status \"1715b47543948f2c9fecbdf94e1ae8ad0f3a407e84380ad1bab420db47d362f2\": rpc error: code = NotFound desc = could not find container \"1715b47543948f2c9fecbdf94e1ae8ad0f3a407e84380ad1bab420db47d362f2\": container with ID starting with 1715b47543948f2c9fecbdf94e1ae8ad0f3a407e84380ad1bab420db47d362f2 not found: ID does not exist" Feb 16 20:59:57 crc kubenswrapper[4811]: I0216 20:59:57.006216 4811 scope.go:117] "RemoveContainer" containerID="e12469ee3501da4861b4a03b831fff9d12aac1091b232355e1fc6be51f0524cb" Feb 16 20:59:57 crc kubenswrapper[4811]: E0216 20:59:57.006553 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e12469ee3501da4861b4a03b831fff9d12aac1091b232355e1fc6be51f0524cb\": container with ID starting with e12469ee3501da4861b4a03b831fff9d12aac1091b232355e1fc6be51f0524cb not found: ID does not exist" containerID="e12469ee3501da4861b4a03b831fff9d12aac1091b232355e1fc6be51f0524cb" Feb 16 20:59:57 crc kubenswrapper[4811]: I0216 20:59:57.006598 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e12469ee3501da4861b4a03b831fff9d12aac1091b232355e1fc6be51f0524cb"} err="failed to get container status \"e12469ee3501da4861b4a03b831fff9d12aac1091b232355e1fc6be51f0524cb\": rpc error: code = 
NotFound desc = could not find container \"e12469ee3501da4861b4a03b831fff9d12aac1091b232355e1fc6be51f0524cb\": container with ID starting with e12469ee3501da4861b4a03b831fff9d12aac1091b232355e1fc6be51f0524cb not found: ID does not exist" Feb 16 20:59:57 crc kubenswrapper[4811]: I0216 20:59:57.006630 4811 scope.go:117] "RemoveContainer" containerID="2d73531fff2b16ff1a74d2ff2c9cba3411cc677c837f4a31672f72518b049bb4" Feb 16 20:59:57 crc kubenswrapper[4811]: E0216 20:59:57.006923 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d73531fff2b16ff1a74d2ff2c9cba3411cc677c837f4a31672f72518b049bb4\": container with ID starting with 2d73531fff2b16ff1a74d2ff2c9cba3411cc677c837f4a31672f72518b049bb4 not found: ID does not exist" containerID="2d73531fff2b16ff1a74d2ff2c9cba3411cc677c837f4a31672f72518b049bb4" Feb 16 20:59:57 crc kubenswrapper[4811]: I0216 20:59:57.006951 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d73531fff2b16ff1a74d2ff2c9cba3411cc677c837f4a31672f72518b049bb4"} err="failed to get container status \"2d73531fff2b16ff1a74d2ff2c9cba3411cc677c837f4a31672f72518b049bb4\": rpc error: code = NotFound desc = could not find container \"2d73531fff2b16ff1a74d2ff2c9cba3411cc677c837f4a31672f72518b049bb4\": container with ID starting with 2d73531fff2b16ff1a74d2ff2c9cba3411cc677c837f4a31672f72518b049bb4 not found: ID does not exist" Feb 16 20:59:57 crc kubenswrapper[4811]: I0216 20:59:57.006968 4811 scope.go:117] "RemoveContainer" containerID="fae7944547d632e293c447e9a45e08bbd2eb056223d2c049498333344765313b" Feb 16 20:59:57 crc kubenswrapper[4811]: I0216 20:59:57.023250 4811 scope.go:117] "RemoveContainer" containerID="004ed10147b57fc249111f251071a925b2376570d63d3904463c4bf1507e2ccb" Feb 16 20:59:57 crc kubenswrapper[4811]: I0216 20:59:57.041417 4811 scope.go:117] "RemoveContainer" 
containerID="8a64a71cce89bc7916b6342721ea2f8e7e45dbc673236e1deeaf635f15d6b407" Feb 16 20:59:57 crc kubenswrapper[4811]: I0216 20:59:57.238283 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cspmf"] Feb 16 20:59:57 crc kubenswrapper[4811]: I0216 20:59:57.240859 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cspmf"] Feb 16 20:59:58 crc kubenswrapper[4811]: I0216 20:59:58.713232 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14b78b5a-3cbf-4b80-8831-8f522bf2a2e5" path="/var/lib/kubelet/pods/14b78b5a-3cbf-4b80-8831-8f522bf2a2e5/volumes" Feb 16 20:59:58 crc kubenswrapper[4811]: I0216 20:59:58.713880 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f877237-a18d-42d1-9727-d62eb52ea19c" path="/var/lib/kubelet/pods/4f877237-a18d-42d1-9727-d62eb52ea19c/volumes" Feb 16 21:00:00 crc kubenswrapper[4811]: I0216 21:00:00.162655 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521260-8xv4l"] Feb 16 21:00:00 crc kubenswrapper[4811]: E0216 21:00:00.164217 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f877237-a18d-42d1-9727-d62eb52ea19c" containerName="extract-content" Feb 16 21:00:00 crc kubenswrapper[4811]: I0216 21:00:00.164308 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f877237-a18d-42d1-9727-d62eb52ea19c" containerName="extract-content" Feb 16 21:00:00 crc kubenswrapper[4811]: E0216 21:00:00.164432 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f877237-a18d-42d1-9727-d62eb52ea19c" containerName="registry-server" Feb 16 21:00:00 crc kubenswrapper[4811]: I0216 21:00:00.164522 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f877237-a18d-42d1-9727-d62eb52ea19c" containerName="registry-server" Feb 16 21:00:00 crc kubenswrapper[4811]: E0216 21:00:00.164583 4811 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="4f877237-a18d-42d1-9727-d62eb52ea19c" containerName="extract-utilities" Feb 16 21:00:00 crc kubenswrapper[4811]: I0216 21:00:00.164644 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f877237-a18d-42d1-9727-d62eb52ea19c" containerName="extract-utilities" Feb 16 21:00:00 crc kubenswrapper[4811]: E0216 21:00:00.164733 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14b78b5a-3cbf-4b80-8831-8f522bf2a2e5" containerName="extract-utilities" Feb 16 21:00:00 crc kubenswrapper[4811]: I0216 21:00:00.164799 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="14b78b5a-3cbf-4b80-8831-8f522bf2a2e5" containerName="extract-utilities" Feb 16 21:00:00 crc kubenswrapper[4811]: E0216 21:00:00.164864 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14b78b5a-3cbf-4b80-8831-8f522bf2a2e5" containerName="extract-content" Feb 16 21:00:00 crc kubenswrapper[4811]: I0216 21:00:00.164925 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="14b78b5a-3cbf-4b80-8831-8f522bf2a2e5" containerName="extract-content" Feb 16 21:00:00 crc kubenswrapper[4811]: E0216 21:00:00.164988 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14b78b5a-3cbf-4b80-8831-8f522bf2a2e5" containerName="registry-server" Feb 16 21:00:00 crc kubenswrapper[4811]: I0216 21:00:00.165040 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="14b78b5a-3cbf-4b80-8831-8f522bf2a2e5" containerName="registry-server" Feb 16 21:00:00 crc kubenswrapper[4811]: I0216 21:00:00.165251 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f877237-a18d-42d1-9727-d62eb52ea19c" containerName="registry-server" Feb 16 21:00:00 crc kubenswrapper[4811]: I0216 21:00:00.165331 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="14b78b5a-3cbf-4b80-8831-8f522bf2a2e5" containerName="registry-server" Feb 16 21:00:00 crc kubenswrapper[4811]: I0216 21:00:00.165931 4811 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-8xv4l" Feb 16 21:00:00 crc kubenswrapper[4811]: I0216 21:00:00.169997 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 21:00:00 crc kubenswrapper[4811]: I0216 21:00:00.170909 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 21:00:00 crc kubenswrapper[4811]: I0216 21:00:00.172457 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521260-8xv4l"] Feb 16 21:00:00 crc kubenswrapper[4811]: I0216 21:00:00.306759 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-689v2\" (UniqueName: \"kubernetes.io/projected/4b12cc0f-d02f-4db6-8937-190156d483ff-kube-api-access-689v2\") pod \"collect-profiles-29521260-8xv4l\" (UID: \"4b12cc0f-d02f-4db6-8937-190156d483ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-8xv4l" Feb 16 21:00:00 crc kubenswrapper[4811]: I0216 21:00:00.306813 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b12cc0f-d02f-4db6-8937-190156d483ff-config-volume\") pod \"collect-profiles-29521260-8xv4l\" (UID: \"4b12cc0f-d02f-4db6-8937-190156d483ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-8xv4l" Feb 16 21:00:00 crc kubenswrapper[4811]: I0216 21:00:00.307228 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b12cc0f-d02f-4db6-8937-190156d483ff-secret-volume\") pod \"collect-profiles-29521260-8xv4l\" (UID: \"4b12cc0f-d02f-4db6-8937-190156d483ff\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-8xv4l" Feb 16 21:00:00 crc kubenswrapper[4811]: I0216 21:00:00.408002 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-689v2\" (UniqueName: \"kubernetes.io/projected/4b12cc0f-d02f-4db6-8937-190156d483ff-kube-api-access-689v2\") pod \"collect-profiles-29521260-8xv4l\" (UID: \"4b12cc0f-d02f-4db6-8937-190156d483ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-8xv4l" Feb 16 21:00:00 crc kubenswrapper[4811]: I0216 21:00:00.408060 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b12cc0f-d02f-4db6-8937-190156d483ff-config-volume\") pod \"collect-profiles-29521260-8xv4l\" (UID: \"4b12cc0f-d02f-4db6-8937-190156d483ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-8xv4l" Feb 16 21:00:00 crc kubenswrapper[4811]: I0216 21:00:00.408122 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b12cc0f-d02f-4db6-8937-190156d483ff-secret-volume\") pod \"collect-profiles-29521260-8xv4l\" (UID: \"4b12cc0f-d02f-4db6-8937-190156d483ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-8xv4l" Feb 16 21:00:00 crc kubenswrapper[4811]: I0216 21:00:00.409738 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b12cc0f-d02f-4db6-8937-190156d483ff-config-volume\") pod \"collect-profiles-29521260-8xv4l\" (UID: \"4b12cc0f-d02f-4db6-8937-190156d483ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-8xv4l" Feb 16 21:00:00 crc kubenswrapper[4811]: I0216 21:00:00.418573 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4b12cc0f-d02f-4db6-8937-190156d483ff-secret-volume\") pod \"collect-profiles-29521260-8xv4l\" (UID: \"4b12cc0f-d02f-4db6-8937-190156d483ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-8xv4l" Feb 16 21:00:00 crc kubenswrapper[4811]: I0216 21:00:00.425663 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-689v2\" (UniqueName: \"kubernetes.io/projected/4b12cc0f-d02f-4db6-8937-190156d483ff-kube-api-access-689v2\") pod \"collect-profiles-29521260-8xv4l\" (UID: \"4b12cc0f-d02f-4db6-8937-190156d483ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-8xv4l" Feb 16 21:00:00 crc kubenswrapper[4811]: I0216 21:00:00.499994 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-8xv4l" Feb 16 21:00:00 crc kubenswrapper[4811]: I0216 21:00:00.725042 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521260-8xv4l"] Feb 16 21:00:00 crc kubenswrapper[4811]: W0216 21:00:00.740637 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b12cc0f_d02f_4db6_8937_190156d483ff.slice/crio-33591f658cf92adbb748d8a74cd285218fd6193ec16fed55bceb2c2c8d298b75 WatchSource:0}: Error finding container 33591f658cf92adbb748d8a74cd285218fd6193ec16fed55bceb2c2c8d298b75: Status 404 returned error can't find the container with id 33591f658cf92adbb748d8a74cd285218fd6193ec16fed55bceb2c2c8d298b75 Feb 16 21:00:00 crc kubenswrapper[4811]: I0216 21:00:00.942856 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-8xv4l" event={"ID":"4b12cc0f-d02f-4db6-8937-190156d483ff","Type":"ContainerStarted","Data":"6e45156d038066e6f2d7bde1687a7eb781aaedc086dcb461f6b73c515df55a28"} Feb 16 21:00:00 crc 
kubenswrapper[4811]: I0216 21:00:00.942928 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-8xv4l" event={"ID":"4b12cc0f-d02f-4db6-8937-190156d483ff","Type":"ContainerStarted","Data":"33591f658cf92adbb748d8a74cd285218fd6193ec16fed55bceb2c2c8d298b75"} Feb 16 21:00:01 crc kubenswrapper[4811]: I0216 21:00:01.949721 4811 generic.go:334] "Generic (PLEG): container finished" podID="4b12cc0f-d02f-4db6-8937-190156d483ff" containerID="6e45156d038066e6f2d7bde1687a7eb781aaedc086dcb461f6b73c515df55a28" exitCode=0 Feb 16 21:00:01 crc kubenswrapper[4811]: I0216 21:00:01.949798 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-8xv4l" event={"ID":"4b12cc0f-d02f-4db6-8937-190156d483ff","Type":"ContainerDied","Data":"6e45156d038066e6f2d7bde1687a7eb781aaedc086dcb461f6b73c515df55a28"} Feb 16 21:00:03 crc kubenswrapper[4811]: I0216 21:00:03.287154 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-8xv4l" Feb 16 21:00:03 crc kubenswrapper[4811]: I0216 21:00:03.452760 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b12cc0f-d02f-4db6-8937-190156d483ff-config-volume\") pod \"4b12cc0f-d02f-4db6-8937-190156d483ff\" (UID: \"4b12cc0f-d02f-4db6-8937-190156d483ff\") " Feb 16 21:00:03 crc kubenswrapper[4811]: I0216 21:00:03.452899 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b12cc0f-d02f-4db6-8937-190156d483ff-secret-volume\") pod \"4b12cc0f-d02f-4db6-8937-190156d483ff\" (UID: \"4b12cc0f-d02f-4db6-8937-190156d483ff\") " Feb 16 21:00:03 crc kubenswrapper[4811]: I0216 21:00:03.452948 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-689v2\" (UniqueName: \"kubernetes.io/projected/4b12cc0f-d02f-4db6-8937-190156d483ff-kube-api-access-689v2\") pod \"4b12cc0f-d02f-4db6-8937-190156d483ff\" (UID: \"4b12cc0f-d02f-4db6-8937-190156d483ff\") " Feb 16 21:00:03 crc kubenswrapper[4811]: I0216 21:00:03.454778 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b12cc0f-d02f-4db6-8937-190156d483ff-config-volume" (OuterVolumeSpecName: "config-volume") pod "4b12cc0f-d02f-4db6-8937-190156d483ff" (UID: "4b12cc0f-d02f-4db6-8937-190156d483ff"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:00:03 crc kubenswrapper[4811]: I0216 21:00:03.482983 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b12cc0f-d02f-4db6-8937-190156d483ff-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4b12cc0f-d02f-4db6-8937-190156d483ff" (UID: "4b12cc0f-d02f-4db6-8937-190156d483ff"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:00:03 crc kubenswrapper[4811]: I0216 21:00:03.483219 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b12cc0f-d02f-4db6-8937-190156d483ff-kube-api-access-689v2" (OuterVolumeSpecName: "kube-api-access-689v2") pod "4b12cc0f-d02f-4db6-8937-190156d483ff" (UID: "4b12cc0f-d02f-4db6-8937-190156d483ff"). InnerVolumeSpecName "kube-api-access-689v2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:00:03 crc kubenswrapper[4811]: I0216 21:00:03.581132 4811 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b12cc0f-d02f-4db6-8937-190156d483ff-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:03 crc kubenswrapper[4811]: I0216 21:00:03.581205 4811 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b12cc0f-d02f-4db6-8937-190156d483ff-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:03 crc kubenswrapper[4811]: I0216 21:00:03.581223 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-689v2\" (UniqueName: \"kubernetes.io/projected/4b12cc0f-d02f-4db6-8937-190156d483ff-kube-api-access-689v2\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:03 crc kubenswrapper[4811]: I0216 21:00:03.962857 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-8xv4l" event={"ID":"4b12cc0f-d02f-4db6-8937-190156d483ff","Type":"ContainerDied","Data":"33591f658cf92adbb748d8a74cd285218fd6193ec16fed55bceb2c2c8d298b75"} Feb 16 21:00:03 crc kubenswrapper[4811]: I0216 21:00:03.962907 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33591f658cf92adbb748d8a74cd285218fd6193ec16fed55bceb2c2c8d298b75" Feb 16 21:00:03 crc kubenswrapper[4811]: I0216 21:00:03.962913 4811 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-8xv4l" Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 21:00:12.492295 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jtdt8"] Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 21:00:12.499261 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bz6d7"] Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 21:00:12.499570 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bz6d7" podUID="08f82c33-6a50-480c-b780-e95a09a3e064" containerName="registry-server" containerID="cri-o://9921b8d100bce93c518be62769dbd5dff48adef7b9f43c76778a56d4aa3409db" gracePeriod=30 Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 21:00:12.501601 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jtdt8" podUID="aabb6f4a-05fd-4f4f-9211-81884fdd4bb1" containerName="registry-server" containerID="cri-o://49c6017b46f3b85465f2b07a9845fd3e90ff35bd33cca1068eeb82185601f095" gracePeriod=30 Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 21:00:12.506748 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n8rd6"] Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 21:00:12.507045 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-n8rd6" podUID="45722898-287e-4a8e-8816-5928e178d2d7" containerName="marketplace-operator" containerID="cri-o://cb076ec88ec0da6641a861c9eed260017597619ea452ed5975ab0a12643cf3f0" gracePeriod=30 Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 21:00:12.518109 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v7grl"] Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 
21:00:12.519179 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v7grl" podUID="efc370a5-e41c-4eb0-8b79-44a3570cc5a8" containerName="registry-server" containerID="cri-o://697ee72d959c3a00cd54d32881ab0497cfbaaf36936d20fa030284b6dc61de9d" gracePeriod=30 Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 21:00:12.540370 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dcqch"] Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 21:00:12.540842 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dcqch" podUID="764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7" containerName="registry-server" containerID="cri-o://f6d381b9dc856e6f628555646fc069a4f2b7017be1f30be07b4c2b61a0d2e419" gracePeriod=30 Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 21:00:12.546839 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gwhxj"] Feb 16 21:00:12 crc kubenswrapper[4811]: E0216 21:00:12.547208 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b12cc0f-d02f-4db6-8937-190156d483ff" containerName="collect-profiles" Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 21:00:12.547229 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b12cc0f-d02f-4db6-8937-190156d483ff" containerName="collect-profiles" Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 21:00:12.547371 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b12cc0f-d02f-4db6-8937-190156d483ff" containerName="collect-profiles" Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 21:00:12.547955 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gwhxj" Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 21:00:12.561444 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gwhxj"] Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 21:00:12.604502 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6dbdae02-959b-48e1-9297-c76789cdb528-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gwhxj\" (UID: \"6dbdae02-959b-48e1-9297-c76789cdb528\") " pod="openshift-marketplace/marketplace-operator-79b997595-gwhxj" Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 21:00:12.604591 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlngj\" (UniqueName: \"kubernetes.io/projected/6dbdae02-959b-48e1-9297-c76789cdb528-kube-api-access-xlngj\") pod \"marketplace-operator-79b997595-gwhxj\" (UID: \"6dbdae02-959b-48e1-9297-c76789cdb528\") " pod="openshift-marketplace/marketplace-operator-79b997595-gwhxj" Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 21:00:12.604622 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6dbdae02-959b-48e1-9297-c76789cdb528-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gwhxj\" (UID: \"6dbdae02-959b-48e1-9297-c76789cdb528\") " pod="openshift-marketplace/marketplace-operator-79b997595-gwhxj" Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 21:00:12.706312 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6dbdae02-959b-48e1-9297-c76789cdb528-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gwhxj\" (UID: 
\"6dbdae02-959b-48e1-9297-c76789cdb528\") " pod="openshift-marketplace/marketplace-operator-79b997595-gwhxj" Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 21:00:12.706379 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6dbdae02-959b-48e1-9297-c76789cdb528-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gwhxj\" (UID: \"6dbdae02-959b-48e1-9297-c76789cdb528\") " pod="openshift-marketplace/marketplace-operator-79b997595-gwhxj" Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 21:00:12.706420 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlngj\" (UniqueName: \"kubernetes.io/projected/6dbdae02-959b-48e1-9297-c76789cdb528-kube-api-access-xlngj\") pod \"marketplace-operator-79b997595-gwhxj\" (UID: \"6dbdae02-959b-48e1-9297-c76789cdb528\") " pod="openshift-marketplace/marketplace-operator-79b997595-gwhxj" Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 21:00:12.708411 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6dbdae02-959b-48e1-9297-c76789cdb528-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gwhxj\" (UID: \"6dbdae02-959b-48e1-9297-c76789cdb528\") " pod="openshift-marketplace/marketplace-operator-79b997595-gwhxj" Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 21:00:12.712770 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6dbdae02-959b-48e1-9297-c76789cdb528-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gwhxj\" (UID: \"6dbdae02-959b-48e1-9297-c76789cdb528\") " pod="openshift-marketplace/marketplace-operator-79b997595-gwhxj" Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 21:00:12.743543 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xlngj\" (UniqueName: \"kubernetes.io/projected/6dbdae02-959b-48e1-9297-c76789cdb528-kube-api-access-xlngj\") pod \"marketplace-operator-79b997595-gwhxj\" (UID: \"6dbdae02-959b-48e1-9297-c76789cdb528\") " pod="openshift-marketplace/marketplace-operator-79b997595-gwhxj" Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 21:00:12.905610 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gwhxj" Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 21:00:12.916591 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jtdt8" Feb 16 21:00:12 crc kubenswrapper[4811]: I0216 21:00:12.991167 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-n8rd6" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:12.997314 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v7grl" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.006536 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bz6d7" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.012309 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aabb6f4a-05fd-4f4f-9211-81884fdd4bb1-catalog-content\") pod \"aabb6f4a-05fd-4f4f-9211-81884fdd4bb1\" (UID: \"aabb6f4a-05fd-4f4f-9211-81884fdd4bb1\") " Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.012575 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-278zz\" (UniqueName: \"kubernetes.io/projected/aabb6f4a-05fd-4f4f-9211-81884fdd4bb1-kube-api-access-278zz\") pod \"aabb6f4a-05fd-4f4f-9211-81884fdd4bb1\" (UID: \"aabb6f4a-05fd-4f4f-9211-81884fdd4bb1\") " Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.012611 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aabb6f4a-05fd-4f4f-9211-81884fdd4bb1-utilities\") pod \"aabb6f4a-05fd-4f4f-9211-81884fdd4bb1\" (UID: \"aabb6f4a-05fd-4f4f-9211-81884fdd4bb1\") " Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.016915 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aabb6f4a-05fd-4f4f-9211-81884fdd4bb1-utilities" (OuterVolumeSpecName: "utilities") pod "aabb6f4a-05fd-4f4f-9211-81884fdd4bb1" (UID: "aabb6f4a-05fd-4f4f-9211-81884fdd4bb1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.020117 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aabb6f4a-05fd-4f4f-9211-81884fdd4bb1-kube-api-access-278zz" (OuterVolumeSpecName: "kube-api-access-278zz") pod "aabb6f4a-05fd-4f4f-9211-81884fdd4bb1" (UID: "aabb6f4a-05fd-4f4f-9211-81884fdd4bb1"). InnerVolumeSpecName "kube-api-access-278zz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.026717 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dcqch" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.074327 4811 generic.go:334] "Generic (PLEG): container finished" podID="efc370a5-e41c-4eb0-8b79-44a3570cc5a8" containerID="697ee72d959c3a00cd54d32881ab0497cfbaaf36936d20fa030284b6dc61de9d" exitCode=0 Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.074428 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7grl" event={"ID":"efc370a5-e41c-4eb0-8b79-44a3570cc5a8","Type":"ContainerDied","Data":"697ee72d959c3a00cd54d32881ab0497cfbaaf36936d20fa030284b6dc61de9d"} Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.074487 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7grl" event={"ID":"efc370a5-e41c-4eb0-8b79-44a3570cc5a8","Type":"ContainerDied","Data":"4cfb8c7d83ee04b0c7caa885ed7f185506559485e0fa9008407c35140e51cac6"} Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.074545 4811 scope.go:117] "RemoveContainer" containerID="697ee72d959c3a00cd54d32881ab0497cfbaaf36936d20fa030284b6dc61de9d" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.074706 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v7grl" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.081304 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-n8rd6" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.081394 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-n8rd6" event={"ID":"45722898-287e-4a8e-8816-5928e178d2d7","Type":"ContainerDied","Data":"cb076ec88ec0da6641a861c9eed260017597619ea452ed5975ab0a12643cf3f0"} Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.081556 4811 generic.go:334] "Generic (PLEG): container finished" podID="45722898-287e-4a8e-8816-5928e178d2d7" containerID="cb076ec88ec0da6641a861c9eed260017597619ea452ed5975ab0a12643cf3f0" exitCode=0 Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.082593 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-n8rd6" event={"ID":"45722898-287e-4a8e-8816-5928e178d2d7","Type":"ContainerDied","Data":"a353a9621f9f9993210bbd91ac6db505905884818960ecd71200c85eeeb4a3b8"} Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.099011 4811 generic.go:334] "Generic (PLEG): container finished" podID="764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7" containerID="f6d381b9dc856e6f628555646fc069a4f2b7017be1f30be07b4c2b61a0d2e419" exitCode=0 Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.099070 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqch" event={"ID":"764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7","Type":"ContainerDied","Data":"f6d381b9dc856e6f628555646fc069a4f2b7017be1f30be07b4c2b61a0d2e419"} Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.099135 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqch" event={"ID":"764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7","Type":"ContainerDied","Data":"713839fb06a3be82a10ece9b92dbcdbc1b88103fb4c7cc4d38294f9cc0877b57"} Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.099132 4811 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dcqch" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.107524 4811 scope.go:117] "RemoveContainer" containerID="bc18a83c104f380d6b90c4f16bcc887ba498ba8ca301c5600faa834c01ffc3a9" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.109551 4811 generic.go:334] "Generic (PLEG): container finished" podID="08f82c33-6a50-480c-b780-e95a09a3e064" containerID="9921b8d100bce93c518be62769dbd5dff48adef7b9f43c76778a56d4aa3409db" exitCode=0 Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.109626 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bz6d7" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.109653 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz6d7" event={"ID":"08f82c33-6a50-480c-b780-e95a09a3e064","Type":"ContainerDied","Data":"9921b8d100bce93c518be62769dbd5dff48adef7b9f43c76778a56d4aa3409db"} Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.109708 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz6d7" event={"ID":"08f82c33-6a50-480c-b780-e95a09a3e064","Type":"ContainerDied","Data":"d23045af8c3156f7285052ed97eec053caa6abfa3669a6037f5637c76e512cb7"} Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.113477 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqmxf\" (UniqueName: \"kubernetes.io/projected/45722898-287e-4a8e-8816-5928e178d2d7-kube-api-access-kqmxf\") pod \"45722898-287e-4a8e-8816-5928e178d2d7\" (UID: \"45722898-287e-4a8e-8816-5928e178d2d7\") " Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.113520 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7-utilities\") pod 
\"764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7\" (UID: \"764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7\") " Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.113551 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08f82c33-6a50-480c-b780-e95a09a3e064-utilities\") pod \"08f82c33-6a50-480c-b780-e95a09a3e064\" (UID: \"08f82c33-6a50-480c-b780-e95a09a3e064\") " Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.113579 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/45722898-287e-4a8e-8816-5928e178d2d7-marketplace-operator-metrics\") pod \"45722898-287e-4a8e-8816-5928e178d2d7\" (UID: \"45722898-287e-4a8e-8816-5928e178d2d7\") " Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.113609 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj8jt\" (UniqueName: \"kubernetes.io/projected/efc370a5-e41c-4eb0-8b79-44a3570cc5a8-kube-api-access-vj8jt\") pod \"efc370a5-e41c-4eb0-8b79-44a3570cc5a8\" (UID: \"efc370a5-e41c-4eb0-8b79-44a3570cc5a8\") " Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.113660 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7-catalog-content\") pod \"764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7\" (UID: \"764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7\") " Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.113716 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58xmv\" (UniqueName: \"kubernetes.io/projected/08f82c33-6a50-480c-b780-e95a09a3e064-kube-api-access-58xmv\") pod \"08f82c33-6a50-480c-b780-e95a09a3e064\" (UID: \"08f82c33-6a50-480c-b780-e95a09a3e064\") " Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.113742 4811 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45722898-287e-4a8e-8816-5928e178d2d7-marketplace-trusted-ca\") pod \"45722898-287e-4a8e-8816-5928e178d2d7\" (UID: \"45722898-287e-4a8e-8816-5928e178d2d7\") " Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.113791 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efc370a5-e41c-4eb0-8b79-44a3570cc5a8-utilities\") pod \"efc370a5-e41c-4eb0-8b79-44a3570cc5a8\" (UID: \"efc370a5-e41c-4eb0-8b79-44a3570cc5a8\") " Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.113824 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08f82c33-6a50-480c-b780-e95a09a3e064-catalog-content\") pod \"08f82c33-6a50-480c-b780-e95a09a3e064\" (UID: \"08f82c33-6a50-480c-b780-e95a09a3e064\") " Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.113852 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwlqt\" (UniqueName: \"kubernetes.io/projected/764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7-kube-api-access-lwlqt\") pod \"764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7\" (UID: \"764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7\") " Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.113886 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efc370a5-e41c-4eb0-8b79-44a3570cc5a8-catalog-content\") pod \"efc370a5-e41c-4eb0-8b79-44a3570cc5a8\" (UID: \"efc370a5-e41c-4eb0-8b79-44a3570cc5a8\") " Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.114100 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-278zz\" (UniqueName: \"kubernetes.io/projected/aabb6f4a-05fd-4f4f-9211-81884fdd4bb1-kube-api-access-278zz\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 
crc kubenswrapper[4811]: I0216 21:00:13.114115 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aabb6f4a-05fd-4f4f-9211-81884fdd4bb1-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.114920 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08f82c33-6a50-480c-b780-e95a09a3e064-utilities" (OuterVolumeSpecName: "utilities") pod "08f82c33-6a50-480c-b780-e95a09a3e064" (UID: "08f82c33-6a50-480c-b780-e95a09a3e064"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.115108 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7-utilities" (OuterVolumeSpecName: "utilities") pod "764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7" (UID: "764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.116713 4811 generic.go:334] "Generic (PLEG): container finished" podID="aabb6f4a-05fd-4f4f-9211-81884fdd4bb1" containerID="49c6017b46f3b85465f2b07a9845fd3e90ff35bd33cca1068eeb82185601f095" exitCode=0 Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.116762 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtdt8" event={"ID":"aabb6f4a-05fd-4f4f-9211-81884fdd4bb1","Type":"ContainerDied","Data":"49c6017b46f3b85465f2b07a9845fd3e90ff35bd33cca1068eeb82185601f095"} Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.116795 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtdt8" event={"ID":"aabb6f4a-05fd-4f4f-9211-81884fdd4bb1","Type":"ContainerDied","Data":"f3f5d76128210bc1b30a7d7d1212971b13df018abe458cb59dfe60370a506368"} Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.116872 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jtdt8" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.117130 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45722898-287e-4a8e-8816-5928e178d2d7-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "45722898-287e-4a8e-8816-5928e178d2d7" (UID: "45722898-287e-4a8e-8816-5928e178d2d7"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.120105 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45722898-287e-4a8e-8816-5928e178d2d7-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "45722898-287e-4a8e-8816-5928e178d2d7" (UID: "45722898-287e-4a8e-8816-5928e178d2d7"). 
InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.120973 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7-kube-api-access-lwlqt" (OuterVolumeSpecName: "kube-api-access-lwlqt") pod "764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7" (UID: "764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7"). InnerVolumeSpecName "kube-api-access-lwlqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.121067 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45722898-287e-4a8e-8816-5928e178d2d7-kube-api-access-kqmxf" (OuterVolumeSpecName: "kube-api-access-kqmxf") pod "45722898-287e-4a8e-8816-5928e178d2d7" (UID: "45722898-287e-4a8e-8816-5928e178d2d7"). InnerVolumeSpecName "kube-api-access-kqmxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.127140 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08f82c33-6a50-480c-b780-e95a09a3e064-kube-api-access-58xmv" (OuterVolumeSpecName: "kube-api-access-58xmv") pod "08f82c33-6a50-480c-b780-e95a09a3e064" (UID: "08f82c33-6a50-480c-b780-e95a09a3e064"). InnerVolumeSpecName "kube-api-access-58xmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.128711 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efc370a5-e41c-4eb0-8b79-44a3570cc5a8-utilities" (OuterVolumeSpecName: "utilities") pod "efc370a5-e41c-4eb0-8b79-44a3570cc5a8" (UID: "efc370a5-e41c-4eb0-8b79-44a3570cc5a8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.130242 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efc370a5-e41c-4eb0-8b79-44a3570cc5a8-kube-api-access-vj8jt" (OuterVolumeSpecName: "kube-api-access-vj8jt") pod "efc370a5-e41c-4eb0-8b79-44a3570cc5a8" (UID: "efc370a5-e41c-4eb0-8b79-44a3570cc5a8"). InnerVolumeSpecName "kube-api-access-vj8jt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.132974 4811 scope.go:117] "RemoveContainer" containerID="847549fe5ea4d1a4626c3b7135c48525d0f27d4e57599cf5ae346ef0c0fa00ab" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.133291 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aabb6f4a-05fd-4f4f-9211-81884fdd4bb1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aabb6f4a-05fd-4f4f-9211-81884fdd4bb1" (UID: "aabb6f4a-05fd-4f4f-9211-81884fdd4bb1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.161393 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efc370a5-e41c-4eb0-8b79-44a3570cc5a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efc370a5-e41c-4eb0-8b79-44a3570cc5a8" (UID: "efc370a5-e41c-4eb0-8b79-44a3570cc5a8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.161507 4811 scope.go:117] "RemoveContainer" containerID="697ee72d959c3a00cd54d32881ab0497cfbaaf36936d20fa030284b6dc61de9d" Feb 16 21:00:13 crc kubenswrapper[4811]: E0216 21:00:13.162210 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"697ee72d959c3a00cd54d32881ab0497cfbaaf36936d20fa030284b6dc61de9d\": container with ID starting with 697ee72d959c3a00cd54d32881ab0497cfbaaf36936d20fa030284b6dc61de9d not found: ID does not exist" containerID="697ee72d959c3a00cd54d32881ab0497cfbaaf36936d20fa030284b6dc61de9d" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.162280 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"697ee72d959c3a00cd54d32881ab0497cfbaaf36936d20fa030284b6dc61de9d"} err="failed to get container status \"697ee72d959c3a00cd54d32881ab0497cfbaaf36936d20fa030284b6dc61de9d\": rpc error: code = NotFound desc = could not find container \"697ee72d959c3a00cd54d32881ab0497cfbaaf36936d20fa030284b6dc61de9d\": container with ID starting with 697ee72d959c3a00cd54d32881ab0497cfbaaf36936d20fa030284b6dc61de9d not found: ID does not exist" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.162321 4811 scope.go:117] "RemoveContainer" containerID="bc18a83c104f380d6b90c4f16bcc887ba498ba8ca301c5600faa834c01ffc3a9" Feb 16 21:00:13 crc kubenswrapper[4811]: E0216 21:00:13.162985 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc18a83c104f380d6b90c4f16bcc887ba498ba8ca301c5600faa834c01ffc3a9\": container with ID starting with bc18a83c104f380d6b90c4f16bcc887ba498ba8ca301c5600faa834c01ffc3a9 not found: ID does not exist" containerID="bc18a83c104f380d6b90c4f16bcc887ba498ba8ca301c5600faa834c01ffc3a9" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.163031 
4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc18a83c104f380d6b90c4f16bcc887ba498ba8ca301c5600faa834c01ffc3a9"} err="failed to get container status \"bc18a83c104f380d6b90c4f16bcc887ba498ba8ca301c5600faa834c01ffc3a9\": rpc error: code = NotFound desc = could not find container \"bc18a83c104f380d6b90c4f16bcc887ba498ba8ca301c5600faa834c01ffc3a9\": container with ID starting with bc18a83c104f380d6b90c4f16bcc887ba498ba8ca301c5600faa834c01ffc3a9 not found: ID does not exist" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.163067 4811 scope.go:117] "RemoveContainer" containerID="847549fe5ea4d1a4626c3b7135c48525d0f27d4e57599cf5ae346ef0c0fa00ab" Feb 16 21:00:13 crc kubenswrapper[4811]: E0216 21:00:13.164036 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"847549fe5ea4d1a4626c3b7135c48525d0f27d4e57599cf5ae346ef0c0fa00ab\": container with ID starting with 847549fe5ea4d1a4626c3b7135c48525d0f27d4e57599cf5ae346ef0c0fa00ab not found: ID does not exist" containerID="847549fe5ea4d1a4626c3b7135c48525d0f27d4e57599cf5ae346ef0c0fa00ab" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.164057 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"847549fe5ea4d1a4626c3b7135c48525d0f27d4e57599cf5ae346ef0c0fa00ab"} err="failed to get container status \"847549fe5ea4d1a4626c3b7135c48525d0f27d4e57599cf5ae346ef0c0fa00ab\": rpc error: code = NotFound desc = could not find container \"847549fe5ea4d1a4626c3b7135c48525d0f27d4e57599cf5ae346ef0c0fa00ab\": container with ID starting with 847549fe5ea4d1a4626c3b7135c48525d0f27d4e57599cf5ae346ef0c0fa00ab not found: ID does not exist" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.164078 4811 scope.go:117] "RemoveContainer" containerID="cb076ec88ec0da6641a861c9eed260017597619ea452ed5975ab0a12643cf3f0" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 
21:00:13.181507 4811 scope.go:117] "RemoveContainer" containerID="cb076ec88ec0da6641a861c9eed260017597619ea452ed5975ab0a12643cf3f0" Feb 16 21:00:13 crc kubenswrapper[4811]: E0216 21:00:13.182014 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb076ec88ec0da6641a861c9eed260017597619ea452ed5975ab0a12643cf3f0\": container with ID starting with cb076ec88ec0da6641a861c9eed260017597619ea452ed5975ab0a12643cf3f0 not found: ID does not exist" containerID="cb076ec88ec0da6641a861c9eed260017597619ea452ed5975ab0a12643cf3f0" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.182053 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb076ec88ec0da6641a861c9eed260017597619ea452ed5975ab0a12643cf3f0"} err="failed to get container status \"cb076ec88ec0da6641a861c9eed260017597619ea452ed5975ab0a12643cf3f0\": rpc error: code = NotFound desc = could not find container \"cb076ec88ec0da6641a861c9eed260017597619ea452ed5975ab0a12643cf3f0\": container with ID starting with cb076ec88ec0da6641a861c9eed260017597619ea452ed5975ab0a12643cf3f0 not found: ID does not exist" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.182096 4811 scope.go:117] "RemoveContainer" containerID="f6d381b9dc856e6f628555646fc069a4f2b7017be1f30be07b4c2b61a0d2e419" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.199044 4811 scope.go:117] "RemoveContainer" containerID="645a13cfa16a206ca18eb438cc5e0e9066d881a5a7677123b31039c3d465a000" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.211747 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08f82c33-6a50-480c-b780-e95a09a3e064-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08f82c33-6a50-480c-b780-e95a09a3e064" (UID: "08f82c33-6a50-480c-b780-e95a09a3e064"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.215896 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08f82c33-6a50-480c-b780-e95a09a3e064-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.215933 4811 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/45722898-287e-4a8e-8816-5928e178d2d7-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.215950 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj8jt\" (UniqueName: \"kubernetes.io/projected/efc370a5-e41c-4eb0-8b79-44a3570cc5a8-kube-api-access-vj8jt\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.215959 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58xmv\" (UniqueName: \"kubernetes.io/projected/08f82c33-6a50-480c-b780-e95a09a3e064-kube-api-access-58xmv\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.215968 4811 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45722898-287e-4a8e-8816-5928e178d2d7-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.215977 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efc370a5-e41c-4eb0-8b79-44a3570cc5a8-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.215986 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08f82c33-6a50-480c-b780-e95a09a3e064-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 
crc kubenswrapper[4811]: I0216 21:00:13.215984 4811 scope.go:117] "RemoveContainer" containerID="fc39fd583496d29361ce16910ea53656d56fd815d1b30fdb3472e49d5f614c7b" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.215997 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwlqt\" (UniqueName: \"kubernetes.io/projected/764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7-kube-api-access-lwlqt\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.216114 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aabb6f4a-05fd-4f4f-9211-81884fdd4bb1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.216123 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efc370a5-e41c-4eb0-8b79-44a3570cc5a8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.216131 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqmxf\" (UniqueName: \"kubernetes.io/projected/45722898-287e-4a8e-8816-5928e178d2d7-kube-api-access-kqmxf\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.216139 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.230799 4811 scope.go:117] "RemoveContainer" containerID="f6d381b9dc856e6f628555646fc069a4f2b7017be1f30be07b4c2b61a0d2e419" Feb 16 21:00:13 crc kubenswrapper[4811]: E0216 21:00:13.232343 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6d381b9dc856e6f628555646fc069a4f2b7017be1f30be07b4c2b61a0d2e419\": container with ID starting with 
f6d381b9dc856e6f628555646fc069a4f2b7017be1f30be07b4c2b61a0d2e419 not found: ID does not exist" containerID="f6d381b9dc856e6f628555646fc069a4f2b7017be1f30be07b4c2b61a0d2e419" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.232386 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6d381b9dc856e6f628555646fc069a4f2b7017be1f30be07b4c2b61a0d2e419"} err="failed to get container status \"f6d381b9dc856e6f628555646fc069a4f2b7017be1f30be07b4c2b61a0d2e419\": rpc error: code = NotFound desc = could not find container \"f6d381b9dc856e6f628555646fc069a4f2b7017be1f30be07b4c2b61a0d2e419\": container with ID starting with f6d381b9dc856e6f628555646fc069a4f2b7017be1f30be07b4c2b61a0d2e419 not found: ID does not exist" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.232419 4811 scope.go:117] "RemoveContainer" containerID="645a13cfa16a206ca18eb438cc5e0e9066d881a5a7677123b31039c3d465a000" Feb 16 21:00:13 crc kubenswrapper[4811]: E0216 21:00:13.232757 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"645a13cfa16a206ca18eb438cc5e0e9066d881a5a7677123b31039c3d465a000\": container with ID starting with 645a13cfa16a206ca18eb438cc5e0e9066d881a5a7677123b31039c3d465a000 not found: ID does not exist" containerID="645a13cfa16a206ca18eb438cc5e0e9066d881a5a7677123b31039c3d465a000" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.232817 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"645a13cfa16a206ca18eb438cc5e0e9066d881a5a7677123b31039c3d465a000"} err="failed to get container status \"645a13cfa16a206ca18eb438cc5e0e9066d881a5a7677123b31039c3d465a000\": rpc error: code = NotFound desc = could not find container \"645a13cfa16a206ca18eb438cc5e0e9066d881a5a7677123b31039c3d465a000\": container with ID starting with 645a13cfa16a206ca18eb438cc5e0e9066d881a5a7677123b31039c3d465a000 not found: ID does not 
exist" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.232857 4811 scope.go:117] "RemoveContainer" containerID="fc39fd583496d29361ce16910ea53656d56fd815d1b30fdb3472e49d5f614c7b" Feb 16 21:00:13 crc kubenswrapper[4811]: E0216 21:00:13.233164 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc39fd583496d29361ce16910ea53656d56fd815d1b30fdb3472e49d5f614c7b\": container with ID starting with fc39fd583496d29361ce16910ea53656d56fd815d1b30fdb3472e49d5f614c7b not found: ID does not exist" containerID="fc39fd583496d29361ce16910ea53656d56fd815d1b30fdb3472e49d5f614c7b" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.233217 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc39fd583496d29361ce16910ea53656d56fd815d1b30fdb3472e49d5f614c7b"} err="failed to get container status \"fc39fd583496d29361ce16910ea53656d56fd815d1b30fdb3472e49d5f614c7b\": rpc error: code = NotFound desc = could not find container \"fc39fd583496d29361ce16910ea53656d56fd815d1b30fdb3472e49d5f614c7b\": container with ID starting with fc39fd583496d29361ce16910ea53656d56fd815d1b30fdb3472e49d5f614c7b not found: ID does not exist" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.233257 4811 scope.go:117] "RemoveContainer" containerID="9921b8d100bce93c518be62769dbd5dff48adef7b9f43c76778a56d4aa3409db" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.245925 4811 scope.go:117] "RemoveContainer" containerID="9bec512905f2499ffe923aba2f2909de5d87db8b9770dc9a2426c2d2136260b7" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.260170 4811 scope.go:117] "RemoveContainer" containerID="e2047ca314897ff6dc0045ec7ca070154c61ada89c7ad5c17565a6a4a1bc79f5" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.282601 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7-catalog-content" 
(OuterVolumeSpecName: "catalog-content") pod "764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7" (UID: "764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.283769 4811 scope.go:117] "RemoveContainer" containerID="9921b8d100bce93c518be62769dbd5dff48adef7b9f43c76778a56d4aa3409db" Feb 16 21:00:13 crc kubenswrapper[4811]: E0216 21:00:13.284236 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9921b8d100bce93c518be62769dbd5dff48adef7b9f43c76778a56d4aa3409db\": container with ID starting with 9921b8d100bce93c518be62769dbd5dff48adef7b9f43c76778a56d4aa3409db not found: ID does not exist" containerID="9921b8d100bce93c518be62769dbd5dff48adef7b9f43c76778a56d4aa3409db" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.284272 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9921b8d100bce93c518be62769dbd5dff48adef7b9f43c76778a56d4aa3409db"} err="failed to get container status \"9921b8d100bce93c518be62769dbd5dff48adef7b9f43c76778a56d4aa3409db\": rpc error: code = NotFound desc = could not find container \"9921b8d100bce93c518be62769dbd5dff48adef7b9f43c76778a56d4aa3409db\": container with ID starting with 9921b8d100bce93c518be62769dbd5dff48adef7b9f43c76778a56d4aa3409db not found: ID does not exist" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.284301 4811 scope.go:117] "RemoveContainer" containerID="9bec512905f2499ffe923aba2f2909de5d87db8b9770dc9a2426c2d2136260b7" Feb 16 21:00:13 crc kubenswrapper[4811]: E0216 21:00:13.284816 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bec512905f2499ffe923aba2f2909de5d87db8b9770dc9a2426c2d2136260b7\": container with ID starting with 9bec512905f2499ffe923aba2f2909de5d87db8b9770dc9a2426c2d2136260b7 
not found: ID does not exist" containerID="9bec512905f2499ffe923aba2f2909de5d87db8b9770dc9a2426c2d2136260b7" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.284873 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bec512905f2499ffe923aba2f2909de5d87db8b9770dc9a2426c2d2136260b7"} err="failed to get container status \"9bec512905f2499ffe923aba2f2909de5d87db8b9770dc9a2426c2d2136260b7\": rpc error: code = NotFound desc = could not find container \"9bec512905f2499ffe923aba2f2909de5d87db8b9770dc9a2426c2d2136260b7\": container with ID starting with 9bec512905f2499ffe923aba2f2909de5d87db8b9770dc9a2426c2d2136260b7 not found: ID does not exist" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.284910 4811 scope.go:117] "RemoveContainer" containerID="e2047ca314897ff6dc0045ec7ca070154c61ada89c7ad5c17565a6a4a1bc79f5" Feb 16 21:00:13 crc kubenswrapper[4811]: E0216 21:00:13.285754 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2047ca314897ff6dc0045ec7ca070154c61ada89c7ad5c17565a6a4a1bc79f5\": container with ID starting with e2047ca314897ff6dc0045ec7ca070154c61ada89c7ad5c17565a6a4a1bc79f5 not found: ID does not exist" containerID="e2047ca314897ff6dc0045ec7ca070154c61ada89c7ad5c17565a6a4a1bc79f5" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.285786 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2047ca314897ff6dc0045ec7ca070154c61ada89c7ad5c17565a6a4a1bc79f5"} err="failed to get container status \"e2047ca314897ff6dc0045ec7ca070154c61ada89c7ad5c17565a6a4a1bc79f5\": rpc error: code = NotFound desc = could not find container \"e2047ca314897ff6dc0045ec7ca070154c61ada89c7ad5c17565a6a4a1bc79f5\": container with ID starting with e2047ca314897ff6dc0045ec7ca070154c61ada89c7ad5c17565a6a4a1bc79f5 not found: ID does not exist" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 
21:00:13.285803 4811 scope.go:117] "RemoveContainer" containerID="49c6017b46f3b85465f2b07a9845fd3e90ff35bd33cca1068eeb82185601f095" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.298003 4811 scope.go:117] "RemoveContainer" containerID="1de0bef2b8b35d779904c1206a0a58d6bcec3115a59425668263efbe11158eaa" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.317339 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.318111 4811 scope.go:117] "RemoveContainer" containerID="a6d1b88d499e352ea987eefe628ab414d758a7bad784b3bf7dd40bf87052d9d8" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.331094 4811 scope.go:117] "RemoveContainer" containerID="49c6017b46f3b85465f2b07a9845fd3e90ff35bd33cca1068eeb82185601f095" Feb 16 21:00:13 crc kubenswrapper[4811]: E0216 21:00:13.331585 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49c6017b46f3b85465f2b07a9845fd3e90ff35bd33cca1068eeb82185601f095\": container with ID starting with 49c6017b46f3b85465f2b07a9845fd3e90ff35bd33cca1068eeb82185601f095 not found: ID does not exist" containerID="49c6017b46f3b85465f2b07a9845fd3e90ff35bd33cca1068eeb82185601f095" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.331624 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49c6017b46f3b85465f2b07a9845fd3e90ff35bd33cca1068eeb82185601f095"} err="failed to get container status \"49c6017b46f3b85465f2b07a9845fd3e90ff35bd33cca1068eeb82185601f095\": rpc error: code = NotFound desc = could not find container \"49c6017b46f3b85465f2b07a9845fd3e90ff35bd33cca1068eeb82185601f095\": container with ID starting with 49c6017b46f3b85465f2b07a9845fd3e90ff35bd33cca1068eeb82185601f095 not found: ID does not exist" 
Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.331655 4811 scope.go:117] "RemoveContainer" containerID="1de0bef2b8b35d779904c1206a0a58d6bcec3115a59425668263efbe11158eaa" Feb 16 21:00:13 crc kubenswrapper[4811]: E0216 21:00:13.332022 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1de0bef2b8b35d779904c1206a0a58d6bcec3115a59425668263efbe11158eaa\": container with ID starting with 1de0bef2b8b35d779904c1206a0a58d6bcec3115a59425668263efbe11158eaa not found: ID does not exist" containerID="1de0bef2b8b35d779904c1206a0a58d6bcec3115a59425668263efbe11158eaa" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.332087 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1de0bef2b8b35d779904c1206a0a58d6bcec3115a59425668263efbe11158eaa"} err="failed to get container status \"1de0bef2b8b35d779904c1206a0a58d6bcec3115a59425668263efbe11158eaa\": rpc error: code = NotFound desc = could not find container \"1de0bef2b8b35d779904c1206a0a58d6bcec3115a59425668263efbe11158eaa\": container with ID starting with 1de0bef2b8b35d779904c1206a0a58d6bcec3115a59425668263efbe11158eaa not found: ID does not exist" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.332132 4811 scope.go:117] "RemoveContainer" containerID="a6d1b88d499e352ea987eefe628ab414d758a7bad784b3bf7dd40bf87052d9d8" Feb 16 21:00:13 crc kubenswrapper[4811]: E0216 21:00:13.332511 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6d1b88d499e352ea987eefe628ab414d758a7bad784b3bf7dd40bf87052d9d8\": container with ID starting with a6d1b88d499e352ea987eefe628ab414d758a7bad784b3bf7dd40bf87052d9d8 not found: ID does not exist" containerID="a6d1b88d499e352ea987eefe628ab414d758a7bad784b3bf7dd40bf87052d9d8" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.332545 4811 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"a6d1b88d499e352ea987eefe628ab414d758a7bad784b3bf7dd40bf87052d9d8"} err="failed to get container status \"a6d1b88d499e352ea987eefe628ab414d758a7bad784b3bf7dd40bf87052d9d8\": rpc error: code = NotFound desc = could not find container \"a6d1b88d499e352ea987eefe628ab414d758a7bad784b3bf7dd40bf87052d9d8\": container with ID starting with a6d1b88d499e352ea987eefe628ab414d758a7bad784b3bf7dd40bf87052d9d8 not found: ID does not exist" Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.413688 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v7grl"] Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.419304 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gwhxj"] Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.424031 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v7grl"] Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.428107 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n8rd6"] Feb 16 21:00:13 crc kubenswrapper[4811]: W0216 21:00:13.430808 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dbdae02_959b_48e1_9297_c76789cdb528.slice/crio-66fd4671ca72358566f1c1693cb4e0a4ccfcb3e83ab75c75a58b3609eee75a2c WatchSource:0}: Error finding container 66fd4671ca72358566f1c1693cb4e0a4ccfcb3e83ab75c75a58b3609eee75a2c: Status 404 returned error can't find the container with id 66fd4671ca72358566f1c1693cb4e0a4ccfcb3e83ab75c75a58b3609eee75a2c Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.431996 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n8rd6"] Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.467326 4811 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bz6d7"] Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.485695 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bz6d7"] Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.485802 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dcqch"] Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.488099 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dcqch"] Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.490446 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jtdt8"] Feb 16 21:00:13 crc kubenswrapper[4811]: I0216 21:00:13.492631 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jtdt8"] Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.123842 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gwhxj" event={"ID":"6dbdae02-959b-48e1-9297-c76789cdb528","Type":"ContainerStarted","Data":"371f84d07100e16c5e37b8efc6a516d8ad2f5a8ce63b4c50b808c392f28ae89d"} Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.125534 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gwhxj" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.125557 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gwhxj" event={"ID":"6dbdae02-959b-48e1-9297-c76789cdb528","Type":"ContainerStarted","Data":"66fd4671ca72358566f1c1693cb4e0a4ccfcb3e83ab75c75a58b3609eee75a2c"} Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.128783 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/marketplace-operator-79b997595-gwhxj" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.178018 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-gwhxj" podStartSLOduration=2.177993961 podStartE2EDuration="2.177993961s" podCreationTimestamp="2026-02-16 21:00:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:00:14.151223176 +0000 UTC m=+232.080519154" watchObservedRunningTime="2026-02-16 21:00:14.177993961 +0000 UTC m=+232.107289909" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.693285 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rfd2c"] Feb 16 21:00:14 crc kubenswrapper[4811]: E0216 21:00:14.693584 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aabb6f4a-05fd-4f4f-9211-81884fdd4bb1" containerName="extract-content" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.693606 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabb6f4a-05fd-4f4f-9211-81884fdd4bb1" containerName="extract-content" Feb 16 21:00:14 crc kubenswrapper[4811]: E0216 21:00:14.693623 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08f82c33-6a50-480c-b780-e95a09a3e064" containerName="extract-utilities" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.693632 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="08f82c33-6a50-480c-b780-e95a09a3e064" containerName="extract-utilities" Feb 16 21:00:14 crc kubenswrapper[4811]: E0216 21:00:14.693645 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aabb6f4a-05fd-4f4f-9211-81884fdd4bb1" containerName="extract-utilities" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.693654 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabb6f4a-05fd-4f4f-9211-81884fdd4bb1" 
containerName="extract-utilities" Feb 16 21:00:14 crc kubenswrapper[4811]: E0216 21:00:14.693662 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efc370a5-e41c-4eb0-8b79-44a3570cc5a8" containerName="extract-utilities" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.693669 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="efc370a5-e41c-4eb0-8b79-44a3570cc5a8" containerName="extract-utilities" Feb 16 21:00:14 crc kubenswrapper[4811]: E0216 21:00:14.693679 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7" containerName="registry-server" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.693686 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7" containerName="registry-server" Feb 16 21:00:14 crc kubenswrapper[4811]: E0216 21:00:14.693698 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08f82c33-6a50-480c-b780-e95a09a3e064" containerName="registry-server" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.693706 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="08f82c33-6a50-480c-b780-e95a09a3e064" containerName="registry-server" Feb 16 21:00:14 crc kubenswrapper[4811]: E0216 21:00:14.693718 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efc370a5-e41c-4eb0-8b79-44a3570cc5a8" containerName="registry-server" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.693726 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="efc370a5-e41c-4eb0-8b79-44a3570cc5a8" containerName="registry-server" Feb 16 21:00:14 crc kubenswrapper[4811]: E0216 21:00:14.693740 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08f82c33-6a50-480c-b780-e95a09a3e064" containerName="extract-content" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.693748 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="08f82c33-6a50-480c-b780-e95a09a3e064" 
containerName="extract-content" Feb 16 21:00:14 crc kubenswrapper[4811]: E0216 21:00:14.693759 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efc370a5-e41c-4eb0-8b79-44a3570cc5a8" containerName="extract-content" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.693769 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="efc370a5-e41c-4eb0-8b79-44a3570cc5a8" containerName="extract-content" Feb 16 21:00:14 crc kubenswrapper[4811]: E0216 21:00:14.693780 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7" containerName="extract-utilities" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.693787 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7" containerName="extract-utilities" Feb 16 21:00:14 crc kubenswrapper[4811]: E0216 21:00:14.693798 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aabb6f4a-05fd-4f4f-9211-81884fdd4bb1" containerName="registry-server" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.693807 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabb6f4a-05fd-4f4f-9211-81884fdd4bb1" containerName="registry-server" Feb 16 21:00:14 crc kubenswrapper[4811]: E0216 21:00:14.693817 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7" containerName="extract-content" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.693824 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7" containerName="extract-content" Feb 16 21:00:14 crc kubenswrapper[4811]: E0216 21:00:14.693833 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45722898-287e-4a8e-8816-5928e178d2d7" containerName="marketplace-operator" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.693840 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="45722898-287e-4a8e-8816-5928e178d2d7" 
containerName="marketplace-operator" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.693954 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="efc370a5-e41c-4eb0-8b79-44a3570cc5a8" containerName="registry-server" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.693966 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7" containerName="registry-server" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.693981 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="08f82c33-6a50-480c-b780-e95a09a3e064" containerName="registry-server" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.693989 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="45722898-287e-4a8e-8816-5928e178d2d7" containerName="marketplace-operator" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.693995 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="aabb6f4a-05fd-4f4f-9211-81884fdd4bb1" containerName="registry-server" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.694934 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rfd2c" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.697914 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.711009 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08f82c33-6a50-480c-b780-e95a09a3e064" path="/var/lib/kubelet/pods/08f82c33-6a50-480c-b780-e95a09a3e064/volumes" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.711806 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45722898-287e-4a8e-8816-5928e178d2d7" path="/var/lib/kubelet/pods/45722898-287e-4a8e-8816-5928e178d2d7/volumes" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.712350 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7" path="/var/lib/kubelet/pods/764c7cbd-8003-4afb-84ab-bf0dc3a1ebc7/volumes" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.713585 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aabb6f4a-05fd-4f4f-9211-81884fdd4bb1" path="/var/lib/kubelet/pods/aabb6f4a-05fd-4f4f-9211-81884fdd4bb1/volumes" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.714158 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efc370a5-e41c-4eb0-8b79-44a3570cc5a8" path="/var/lib/kubelet/pods/efc370a5-e41c-4eb0-8b79-44a3570cc5a8/volumes" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.714682 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rfd2c"] Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.736296 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp7s4\" (UniqueName: \"kubernetes.io/projected/a7bd7115-3a3e-4312-8543-2f40686cfdb0-kube-api-access-fp7s4\") pod 
\"redhat-marketplace-rfd2c\" (UID: \"a7bd7115-3a3e-4312-8543-2f40686cfdb0\") " pod="openshift-marketplace/redhat-marketplace-rfd2c" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.736334 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7bd7115-3a3e-4312-8543-2f40686cfdb0-utilities\") pod \"redhat-marketplace-rfd2c\" (UID: \"a7bd7115-3a3e-4312-8543-2f40686cfdb0\") " pod="openshift-marketplace/redhat-marketplace-rfd2c" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.736394 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7bd7115-3a3e-4312-8543-2f40686cfdb0-catalog-content\") pod \"redhat-marketplace-rfd2c\" (UID: \"a7bd7115-3a3e-4312-8543-2f40686cfdb0\") " pod="openshift-marketplace/redhat-marketplace-rfd2c" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.837924 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7bd7115-3a3e-4312-8543-2f40686cfdb0-catalog-content\") pod \"redhat-marketplace-rfd2c\" (UID: \"a7bd7115-3a3e-4312-8543-2f40686cfdb0\") " pod="openshift-marketplace/redhat-marketplace-rfd2c" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.838020 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp7s4\" (UniqueName: \"kubernetes.io/projected/a7bd7115-3a3e-4312-8543-2f40686cfdb0-kube-api-access-fp7s4\") pod \"redhat-marketplace-rfd2c\" (UID: \"a7bd7115-3a3e-4312-8543-2f40686cfdb0\") " pod="openshift-marketplace/redhat-marketplace-rfd2c" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.838053 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7bd7115-3a3e-4312-8543-2f40686cfdb0-utilities\") 
pod \"redhat-marketplace-rfd2c\" (UID: \"a7bd7115-3a3e-4312-8543-2f40686cfdb0\") " pod="openshift-marketplace/redhat-marketplace-rfd2c" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.838676 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7bd7115-3a3e-4312-8543-2f40686cfdb0-utilities\") pod \"redhat-marketplace-rfd2c\" (UID: \"a7bd7115-3a3e-4312-8543-2f40686cfdb0\") " pod="openshift-marketplace/redhat-marketplace-rfd2c" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.839010 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7bd7115-3a3e-4312-8543-2f40686cfdb0-catalog-content\") pod \"redhat-marketplace-rfd2c\" (UID: \"a7bd7115-3a3e-4312-8543-2f40686cfdb0\") " pod="openshift-marketplace/redhat-marketplace-rfd2c" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.865046 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp7s4\" (UniqueName: \"kubernetes.io/projected/a7bd7115-3a3e-4312-8543-2f40686cfdb0-kube-api-access-fp7s4\") pod \"redhat-marketplace-rfd2c\" (UID: \"a7bd7115-3a3e-4312-8543-2f40686cfdb0\") " pod="openshift-marketplace/redhat-marketplace-rfd2c" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.902997 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s8hk9"] Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.904786 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s8hk9" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.907152 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.939578 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c5c0388-6acf-443c-9db5-486defcdeacd-catalog-content\") pod \"certified-operators-s8hk9\" (UID: \"6c5c0388-6acf-443c-9db5-486defcdeacd\") " pod="openshift-marketplace/certified-operators-s8hk9" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.939677 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67rh9\" (UniqueName: \"kubernetes.io/projected/6c5c0388-6acf-443c-9db5-486defcdeacd-kube-api-access-67rh9\") pod \"certified-operators-s8hk9\" (UID: \"6c5c0388-6acf-443c-9db5-486defcdeacd\") " pod="openshift-marketplace/certified-operators-s8hk9" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.939704 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c5c0388-6acf-443c-9db5-486defcdeacd-utilities\") pod \"certified-operators-s8hk9\" (UID: \"6c5c0388-6acf-443c-9db5-486defcdeacd\") " pod="openshift-marketplace/certified-operators-s8hk9" Feb 16 21:00:14 crc kubenswrapper[4811]: I0216 21:00:14.943652 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s8hk9"] Feb 16 21:00:15 crc kubenswrapper[4811]: I0216 21:00:15.036997 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rfd2c" Feb 16 21:00:15 crc kubenswrapper[4811]: I0216 21:00:15.041075 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67rh9\" (UniqueName: \"kubernetes.io/projected/6c5c0388-6acf-443c-9db5-486defcdeacd-kube-api-access-67rh9\") pod \"certified-operators-s8hk9\" (UID: \"6c5c0388-6acf-443c-9db5-486defcdeacd\") " pod="openshift-marketplace/certified-operators-s8hk9" Feb 16 21:00:15 crc kubenswrapper[4811]: I0216 21:00:15.041143 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c5c0388-6acf-443c-9db5-486defcdeacd-utilities\") pod \"certified-operators-s8hk9\" (UID: \"6c5c0388-6acf-443c-9db5-486defcdeacd\") " pod="openshift-marketplace/certified-operators-s8hk9" Feb 16 21:00:15 crc kubenswrapper[4811]: I0216 21:00:15.041279 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c5c0388-6acf-443c-9db5-486defcdeacd-catalog-content\") pod \"certified-operators-s8hk9\" (UID: \"6c5c0388-6acf-443c-9db5-486defcdeacd\") " pod="openshift-marketplace/certified-operators-s8hk9" Feb 16 21:00:15 crc kubenswrapper[4811]: I0216 21:00:15.041707 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c5c0388-6acf-443c-9db5-486defcdeacd-utilities\") pod \"certified-operators-s8hk9\" (UID: \"6c5c0388-6acf-443c-9db5-486defcdeacd\") " pod="openshift-marketplace/certified-operators-s8hk9" Feb 16 21:00:15 crc kubenswrapper[4811]: I0216 21:00:15.042011 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c5c0388-6acf-443c-9db5-486defcdeacd-catalog-content\") pod \"certified-operators-s8hk9\" (UID: \"6c5c0388-6acf-443c-9db5-486defcdeacd\") " 
pod="openshift-marketplace/certified-operators-s8hk9" Feb 16 21:00:15 crc kubenswrapper[4811]: I0216 21:00:15.058076 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67rh9\" (UniqueName: \"kubernetes.io/projected/6c5c0388-6acf-443c-9db5-486defcdeacd-kube-api-access-67rh9\") pod \"certified-operators-s8hk9\" (UID: \"6c5c0388-6acf-443c-9db5-486defcdeacd\") " pod="openshift-marketplace/certified-operators-s8hk9" Feb 16 21:00:15 crc kubenswrapper[4811]: I0216 21:00:15.223306 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s8hk9" Feb 16 21:00:15 crc kubenswrapper[4811]: I0216 21:00:15.250720 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rfd2c"] Feb 16 21:00:15 crc kubenswrapper[4811]: W0216 21:00:15.269340 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7bd7115_3a3e_4312_8543_2f40686cfdb0.slice/crio-6c17ad5045239f77b08b7eed86a1992611a9b9c31a099ece79c422298fc609e2 WatchSource:0}: Error finding container 6c17ad5045239f77b08b7eed86a1992611a9b9c31a099ece79c422298fc609e2: Status 404 returned error can't find the container with id 6c17ad5045239f77b08b7eed86a1992611a9b9c31a099ece79c422298fc609e2 Feb 16 21:00:15 crc kubenswrapper[4811]: I0216 21:00:15.665154 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s8hk9"] Feb 16 21:00:15 crc kubenswrapper[4811]: W0216 21:00:15.670936 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c5c0388_6acf_443c_9db5_486defcdeacd.slice/crio-374b941827a4b5206a9fd1a58b222577e89cda91e65cb1f867326a1599ef6c3f WatchSource:0}: Error finding container 374b941827a4b5206a9fd1a58b222577e89cda91e65cb1f867326a1599ef6c3f: Status 404 returned error can't find the container 
with id 374b941827a4b5206a9fd1a58b222577e89cda91e65cb1f867326a1599ef6c3f Feb 16 21:00:16 crc kubenswrapper[4811]: I0216 21:00:16.150395 4811 generic.go:334] "Generic (PLEG): container finished" podID="6c5c0388-6acf-443c-9db5-486defcdeacd" containerID="addf7e45d8451283d1051a8552ffc90e5ba53ea4ad6ac28637e763d06b8f4995" exitCode=0 Feb 16 21:00:16 crc kubenswrapper[4811]: I0216 21:00:16.150512 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s8hk9" event={"ID":"6c5c0388-6acf-443c-9db5-486defcdeacd","Type":"ContainerDied","Data":"addf7e45d8451283d1051a8552ffc90e5ba53ea4ad6ac28637e763d06b8f4995"} Feb 16 21:00:16 crc kubenswrapper[4811]: I0216 21:00:16.150932 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s8hk9" event={"ID":"6c5c0388-6acf-443c-9db5-486defcdeacd","Type":"ContainerStarted","Data":"374b941827a4b5206a9fd1a58b222577e89cda91e65cb1f867326a1599ef6c3f"} Feb 16 21:00:16 crc kubenswrapper[4811]: I0216 21:00:16.152601 4811 generic.go:334] "Generic (PLEG): container finished" podID="a7bd7115-3a3e-4312-8543-2f40686cfdb0" containerID="03809f04bb123935b27632a5d57eccdb3bb619f5e2668c51d35f60b854c98111" exitCode=0 Feb 16 21:00:16 crc kubenswrapper[4811]: I0216 21:00:16.154048 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfd2c" event={"ID":"a7bd7115-3a3e-4312-8543-2f40686cfdb0","Type":"ContainerDied","Data":"03809f04bb123935b27632a5d57eccdb3bb619f5e2668c51d35f60b854c98111"} Feb 16 21:00:16 crc kubenswrapper[4811]: I0216 21:00:16.154074 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfd2c" event={"ID":"a7bd7115-3a3e-4312-8543-2f40686cfdb0","Type":"ContainerStarted","Data":"6c17ad5045239f77b08b7eed86a1992611a9b9c31a099ece79c422298fc609e2"} Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.095232 4811 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-rrgb4"] Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.096301 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rrgb4" Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.098971 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.122943 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rrgb4"] Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.159056 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s8hk9" event={"ID":"6c5c0388-6acf-443c-9db5-486defcdeacd","Type":"ContainerStarted","Data":"eeb95fd63d07343de9a89cc7212a8b33d0bad90532c928e081f248fb7a360aa0"} Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.160813 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfd2c" event={"ID":"a7bd7115-3a3e-4312-8543-2f40686cfdb0","Type":"ContainerStarted","Data":"55f835b41cc02dcf2bbaf620759e9ca9c64a4faeb7692c79136faebe062596c3"} Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.174710 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c21d535-a947-4399-ac26-4d5bcd1ef31f-catalog-content\") pod \"community-operators-rrgb4\" (UID: \"6c21d535-a947-4399-ac26-4d5bcd1ef31f\") " pod="openshift-marketplace/community-operators-rrgb4" Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.174772 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c21d535-a947-4399-ac26-4d5bcd1ef31f-utilities\") pod \"community-operators-rrgb4\" (UID: 
\"6c21d535-a947-4399-ac26-4d5bcd1ef31f\") " pod="openshift-marketplace/community-operators-rrgb4" Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.174817 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwx9b\" (UniqueName: \"kubernetes.io/projected/6c21d535-a947-4399-ac26-4d5bcd1ef31f-kube-api-access-mwx9b\") pod \"community-operators-rrgb4\" (UID: \"6c21d535-a947-4399-ac26-4d5bcd1ef31f\") " pod="openshift-marketplace/community-operators-rrgb4" Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.277015 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c21d535-a947-4399-ac26-4d5bcd1ef31f-catalog-content\") pod \"community-operators-rrgb4\" (UID: \"6c21d535-a947-4399-ac26-4d5bcd1ef31f\") " pod="openshift-marketplace/community-operators-rrgb4" Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.280575 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c21d535-a947-4399-ac26-4d5bcd1ef31f-catalog-content\") pod \"community-operators-rrgb4\" (UID: \"6c21d535-a947-4399-ac26-4d5bcd1ef31f\") " pod="openshift-marketplace/community-operators-rrgb4" Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.280688 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c21d535-a947-4399-ac26-4d5bcd1ef31f-utilities\") pod \"community-operators-rrgb4\" (UID: \"6c21d535-a947-4399-ac26-4d5bcd1ef31f\") " pod="openshift-marketplace/community-operators-rrgb4" Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.280736 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwx9b\" (UniqueName: \"kubernetes.io/projected/6c21d535-a947-4399-ac26-4d5bcd1ef31f-kube-api-access-mwx9b\") pod 
\"community-operators-rrgb4\" (UID: \"6c21d535-a947-4399-ac26-4d5bcd1ef31f\") " pod="openshift-marketplace/community-operators-rrgb4" Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.280951 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c21d535-a947-4399-ac26-4d5bcd1ef31f-utilities\") pod \"community-operators-rrgb4\" (UID: \"6c21d535-a947-4399-ac26-4d5bcd1ef31f\") " pod="openshift-marketplace/community-operators-rrgb4" Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.295225 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fwbbq"] Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.297056 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fwbbq" Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.299242 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.306068 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwx9b\" (UniqueName: \"kubernetes.io/projected/6c21d535-a947-4399-ac26-4d5bcd1ef31f-kube-api-access-mwx9b\") pod \"community-operators-rrgb4\" (UID: \"6c21d535-a947-4399-ac26-4d5bcd1ef31f\") " pod="openshift-marketplace/community-operators-rrgb4" Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.311713 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fwbbq"] Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.385504 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5-utilities\") pod \"redhat-operators-fwbbq\" (UID: \"a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5\") " 
pod="openshift-marketplace/redhat-operators-fwbbq" Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.385590 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5-catalog-content\") pod \"redhat-operators-fwbbq\" (UID: \"a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5\") " pod="openshift-marketplace/redhat-operators-fwbbq" Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.385620 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67jwd\" (UniqueName: \"kubernetes.io/projected/a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5-kube-api-access-67jwd\") pod \"redhat-operators-fwbbq\" (UID: \"a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5\") " pod="openshift-marketplace/redhat-operators-fwbbq" Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.414647 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rrgb4" Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.487567 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5-utilities\") pod \"redhat-operators-fwbbq\" (UID: \"a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5\") " pod="openshift-marketplace/redhat-operators-fwbbq" Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.487786 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5-catalog-content\") pod \"redhat-operators-fwbbq\" (UID: \"a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5\") " pod="openshift-marketplace/redhat-operators-fwbbq" Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.487848 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-67jwd\" (UniqueName: \"kubernetes.io/projected/a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5-kube-api-access-67jwd\") pod \"redhat-operators-fwbbq\" (UID: \"a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5\") " pod="openshift-marketplace/redhat-operators-fwbbq" Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.489327 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5-utilities\") pod \"redhat-operators-fwbbq\" (UID: \"a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5\") " pod="openshift-marketplace/redhat-operators-fwbbq" Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.489837 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5-catalog-content\") pod \"redhat-operators-fwbbq\" (UID: \"a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5\") " pod="openshift-marketplace/redhat-operators-fwbbq" Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.513477 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67jwd\" (UniqueName: \"kubernetes.io/projected/a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5-kube-api-access-67jwd\") pod \"redhat-operators-fwbbq\" (UID: \"a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5\") " pod="openshift-marketplace/redhat-operators-fwbbq" Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.662870 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fwbbq" Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.835215 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rrgb4"] Feb 16 21:00:17 crc kubenswrapper[4811]: W0216 21:00:17.838617 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c21d535_a947_4399_ac26_4d5bcd1ef31f.slice/crio-9029059eb1fd15f8f2a261decf0760d86606e7e4d5a69d0ca0d496dd0c5879d9 WatchSource:0}: Error finding container 9029059eb1fd15f8f2a261decf0760d86606e7e4d5a69d0ca0d496dd0c5879d9: Status 404 returned error can't find the container with id 9029059eb1fd15f8f2a261decf0760d86606e7e4d5a69d0ca0d496dd0c5879d9 Feb 16 21:00:17 crc kubenswrapper[4811]: I0216 21:00:17.874468 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fwbbq"] Feb 16 21:00:17 crc kubenswrapper[4811]: W0216 21:00:17.886416 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7acaaf0_b18d_4e9b_b9fa_d1c384c879a5.slice/crio-b810d5ad487b6530c54a5d280a2633c05f2559a83578113a5eea4e9bc3f4b62d WatchSource:0}: Error finding container b810d5ad487b6530c54a5d280a2633c05f2559a83578113a5eea4e9bc3f4b62d: Status 404 returned error can't find the container with id b810d5ad487b6530c54a5d280a2633c05f2559a83578113a5eea4e9bc3f4b62d Feb 16 21:00:18 crc kubenswrapper[4811]: I0216 21:00:18.169185 4811 generic.go:334] "Generic (PLEG): container finished" podID="6c5c0388-6acf-443c-9db5-486defcdeacd" containerID="eeb95fd63d07343de9a89cc7212a8b33d0bad90532c928e081f248fb7a360aa0" exitCode=0 Feb 16 21:00:18 crc kubenswrapper[4811]: I0216 21:00:18.169289 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s8hk9" 
event={"ID":"6c5c0388-6acf-443c-9db5-486defcdeacd","Type":"ContainerDied","Data":"eeb95fd63d07343de9a89cc7212a8b33d0bad90532c928e081f248fb7a360aa0"} Feb 16 21:00:18 crc kubenswrapper[4811]: I0216 21:00:18.173570 4811 generic.go:334] "Generic (PLEG): container finished" podID="a7bd7115-3a3e-4312-8543-2f40686cfdb0" containerID="55f835b41cc02dcf2bbaf620759e9ca9c64a4faeb7692c79136faebe062596c3" exitCode=0 Feb 16 21:00:18 crc kubenswrapper[4811]: I0216 21:00:18.173705 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfd2c" event={"ID":"a7bd7115-3a3e-4312-8543-2f40686cfdb0","Type":"ContainerDied","Data":"55f835b41cc02dcf2bbaf620759e9ca9c64a4faeb7692c79136faebe062596c3"} Feb 16 21:00:18 crc kubenswrapper[4811]: I0216 21:00:18.176412 4811 generic.go:334] "Generic (PLEG): container finished" podID="a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5" containerID="4a2a38b2d222704aa0276757e0576437e64eef8bd7b2664ad0ce3c718161ff07" exitCode=0 Feb 16 21:00:18 crc kubenswrapper[4811]: I0216 21:00:18.176506 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwbbq" event={"ID":"a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5","Type":"ContainerDied","Data":"4a2a38b2d222704aa0276757e0576437e64eef8bd7b2664ad0ce3c718161ff07"} Feb 16 21:00:18 crc kubenswrapper[4811]: I0216 21:00:18.176562 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwbbq" event={"ID":"a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5","Type":"ContainerStarted","Data":"b810d5ad487b6530c54a5d280a2633c05f2559a83578113a5eea4e9bc3f4b62d"} Feb 16 21:00:18 crc kubenswrapper[4811]: I0216 21:00:18.179251 4811 generic.go:334] "Generic (PLEG): container finished" podID="6c21d535-a947-4399-ac26-4d5bcd1ef31f" containerID="09b3e806f4a3721f815bf5fb9e2a2c80c19a350e23ca843cffd71324b21433c3" exitCode=0 Feb 16 21:00:18 crc kubenswrapper[4811]: I0216 21:00:18.179303 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-rrgb4" event={"ID":"6c21d535-a947-4399-ac26-4d5bcd1ef31f","Type":"ContainerDied","Data":"09b3e806f4a3721f815bf5fb9e2a2c80c19a350e23ca843cffd71324b21433c3"} Feb 16 21:00:18 crc kubenswrapper[4811]: I0216 21:00:18.179338 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrgb4" event={"ID":"6c21d535-a947-4399-ac26-4d5bcd1ef31f","Type":"ContainerStarted","Data":"9029059eb1fd15f8f2a261decf0760d86606e7e4d5a69d0ca0d496dd0c5879d9"} Feb 16 21:00:19 crc kubenswrapper[4811]: I0216 21:00:19.189858 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrgb4" event={"ID":"6c21d535-a947-4399-ac26-4d5bcd1ef31f","Type":"ContainerStarted","Data":"9ecc9d2edd63088b5aa00ba7f529104e781d1cfc2006e14495a0676d180d6aa2"} Feb 16 21:00:19 crc kubenswrapper[4811]: I0216 21:00:19.194045 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s8hk9" event={"ID":"6c5c0388-6acf-443c-9db5-486defcdeacd","Type":"ContainerStarted","Data":"97b41f6e05256b35e8a212c24d609dd7050d44035be3cdfc3bf6f70866dc16f8"} Feb 16 21:00:19 crc kubenswrapper[4811]: I0216 21:00:19.212344 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfd2c" event={"ID":"a7bd7115-3a3e-4312-8543-2f40686cfdb0","Type":"ContainerStarted","Data":"2b01783c4fd89d2f28f340858df16714071d1700bb10727c4463033ab74050cf"} Feb 16 21:00:19 crc kubenswrapper[4811]: I0216 21:00:19.248698 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s8hk9" podStartSLOduration=2.788205284 podStartE2EDuration="5.24867229s" podCreationTimestamp="2026-02-16 21:00:14 +0000 UTC" firstStartedPulling="2026-02-16 21:00:16.154151167 +0000 UTC m=+234.083447105" lastFinishedPulling="2026-02-16 21:00:18.614618173 +0000 UTC m=+236.543914111" observedRunningTime="2026-02-16 
21:00:19.246808703 +0000 UTC m=+237.176104661" watchObservedRunningTime="2026-02-16 21:00:19.24867229 +0000 UTC m=+237.177968228" Feb 16 21:00:19 crc kubenswrapper[4811]: I0216 21:00:19.270635 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rfd2c" podStartSLOduration=2.782266884 podStartE2EDuration="5.270613893s" podCreationTimestamp="2026-02-16 21:00:14 +0000 UTC" firstStartedPulling="2026-02-16 21:00:16.155122651 +0000 UTC m=+234.084418589" lastFinishedPulling="2026-02-16 21:00:18.64346966 +0000 UTC m=+236.572765598" observedRunningTime="2026-02-16 21:00:19.270362167 +0000 UTC m=+237.199658125" watchObservedRunningTime="2026-02-16 21:00:19.270613893 +0000 UTC m=+237.199909831" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.222534 4811 generic.go:334] "Generic (PLEG): container finished" podID="a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5" containerID="aa9754446f712b005280e82312bbf46d7d7c688b6d52cad8d198b40ff9c3973e" exitCode=0 Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.222584 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwbbq" event={"ID":"a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5","Type":"ContainerDied","Data":"aa9754446f712b005280e82312bbf46d7d7c688b6d52cad8d198b40ff9c3973e"} Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.227222 4811 generic.go:334] "Generic (PLEG): container finished" podID="6c21d535-a947-4399-ac26-4d5bcd1ef31f" containerID="9ecc9d2edd63088b5aa00ba7f529104e781d1cfc2006e14495a0676d180d6aa2" exitCode=0 Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.227402 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrgb4" event={"ID":"6c21d535-a947-4399-ac26-4d5bcd1ef31f","Type":"ContainerDied","Data":"9ecc9d2edd63088b5aa00ba7f529104e781d1cfc2006e14495a0676d180d6aa2"} Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.486349 4811 kubelet.go:2431] 
"SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.486814 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30" gracePeriod=15 Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.486900 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98" gracePeriod=15 Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.486966 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be" gracePeriod=15 Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.486916 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858" gracePeriod=15 Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.486940 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49" gracePeriod=15 Feb 16 21:00:20 crc 
kubenswrapper[4811]: I0216 21:00:20.488252 4811 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 21:00:20 crc kubenswrapper[4811]: E0216 21:00:20.488527 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.488548 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 21:00:20 crc kubenswrapper[4811]: E0216 21:00:20.488559 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.488567 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 21:00:20 crc kubenswrapper[4811]: E0216 21:00:20.488578 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.488584 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 21:00:20 crc kubenswrapper[4811]: E0216 21:00:20.488596 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.488602 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 21:00:20 crc kubenswrapper[4811]: E0216 21:00:20.488607 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 
16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.488614 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 21:00:20 crc kubenswrapper[4811]: E0216 21:00:20.488621 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.488627 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 21:00:20 crc kubenswrapper[4811]: E0216 21:00:20.488642 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.488650 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.488745 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.488757 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.488765 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.488777 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.488784 4811 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.488790 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.492149 4811 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.493596 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.499886 4811 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.532242 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.538785 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.538827 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc 
kubenswrapper[4811]: I0216 21:00:20.538855 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.538880 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.538910 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.538929 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.538953 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.538978 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:20 crc kubenswrapper[4811]: E0216 21:00:20.616592 4811 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.9:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-rrgb4.1894d5cad614ec2b openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-rrgb4,UID:6c21d535-a947-4399-ac26-4d5bcd1ef31f,APIVersion:v1,ResourceVersion:29698,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 385ms (385ms including waiting). 
Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 21:00:20.615253035 +0000 UTC m=+238.544548973,LastTimestamp:2026-02-16 21:00:20.615253035 +0000 UTC m=+238.544548973,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.640728 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.640786 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.640799 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.640845 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.640868 4811 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.640823 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.641037 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.641073 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.641138 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.641182 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.641242 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.641984 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.642020 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.642035 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.642036 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.642164 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4811]: I0216 21:00:20.828348 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4811]: W0216 21:00:20.855395 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-b30f440e588c4b406d00dffc4568c923b0345f4cffd4ff93a7e46692a779ea5d WatchSource:0}: Error finding container b30f440e588c4b406d00dffc4568c923b0345f4cffd4ff93a7e46692a779ea5d: Status 404 returned error can't find the container with id b30f440e588c4b406d00dffc4568c923b0345f4cffd4ff93a7e46692a779ea5d Feb 16 21:00:21 crc kubenswrapper[4811]: E0216 21:00:21.233323 4811 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.9:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-rrgb4.1894d5cad614ec2b openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-rrgb4,UID:6c21d535-a947-4399-ac26-4d5bcd1ef31f,APIVersion:v1,ResourceVersion:29698,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 385ms (385ms including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 21:00:20.615253035 +0000 UTC m=+238.544548973,LastTimestamp:2026-02-16 21:00:20.615253035 +0000 UTC m=+238.544548973,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 21:00:21 crc kubenswrapper[4811]: I0216 21:00:21.236804 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 21:00:21 crc kubenswrapper[4811]: I0216 21:00:21.238310 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 21:00:21 crc kubenswrapper[4811]: I0216 21:00:21.239410 4811 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98" exitCode=0 Feb 16 21:00:21 crc kubenswrapper[4811]: I0216 21:00:21.239438 4811 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49" exitCode=0 Feb 16 21:00:21 crc kubenswrapper[4811]: I0216 21:00:21.239446 4811 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858" exitCode=0 Feb 16 21:00:21 crc kubenswrapper[4811]: I0216 21:00:21.239454 4811 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be" 
exitCode=2 Feb 16 21:00:21 crc kubenswrapper[4811]: I0216 21:00:21.239520 4811 scope.go:117] "RemoveContainer" containerID="fdd5ed16c06c8912a0942cb30a602d49dec9a8557c720ae1b562d09519d64a80" Feb 16 21:00:21 crc kubenswrapper[4811]: I0216 21:00:21.241638 4811 generic.go:334] "Generic (PLEG): container finished" podID="1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" containerID="f0e2491c65691cc4d7eed3544381bb9242e0b5fe408400627c6506bfc34042cb" exitCode=0 Feb 16 21:00:21 crc kubenswrapper[4811]: I0216 21:00:21.241708 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"1a78bc8b-a89b-4473-b54b-d0f31ab9ef89","Type":"ContainerDied","Data":"f0e2491c65691cc4d7eed3544381bb9242e0b5fe408400627c6506bfc34042cb"} Feb 16 21:00:21 crc kubenswrapper[4811]: I0216 21:00:21.247995 4811 status_manager.go:851] "Failed to get status for pod" podUID="1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:21 crc kubenswrapper[4811]: I0216 21:00:21.248429 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:21 crc kubenswrapper[4811]: I0216 21:00:21.249095 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e35afac064215ad90411f6bf91d64a0fe491c93b8ff5bb0d6e654a47214b0d39"} Feb 16 21:00:21 crc kubenswrapper[4811]: I0216 21:00:21.249131 4811 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b30f440e588c4b406d00dffc4568c923b0345f4cffd4ff93a7e46692a779ea5d"} Feb 16 21:00:21 crc kubenswrapper[4811]: I0216 21:00:21.249803 4811 status_manager.go:851] "Failed to get status for pod" podUID="1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:21 crc kubenswrapper[4811]: I0216 21:00:21.250136 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:21 crc kubenswrapper[4811]: I0216 21:00:21.251918 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwbbq" event={"ID":"a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5","Type":"ContainerStarted","Data":"3d00ec16298bd1b23f4562ef5a248528909e39227e1d342053d526f76e3008a0"} Feb 16 21:00:21 crc kubenswrapper[4811]: I0216 21:00:21.252913 4811 status_manager.go:851] "Failed to get status for pod" podUID="1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:21 crc kubenswrapper[4811]: I0216 21:00:21.253216 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5" pod="openshift-marketplace/redhat-operators-fwbbq" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fwbbq\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:21 crc kubenswrapper[4811]: I0216 21:00:21.253577 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:21 crc kubenswrapper[4811]: I0216 21:00:21.255374 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrgb4" event={"ID":"6c21d535-a947-4399-ac26-4d5bcd1ef31f","Type":"ContainerStarted","Data":"4cf93ecaa51935766fb84b0a75aca3a1901ba64ff97fa96e1b453bd52aeb7069"} Feb 16 21:00:21 crc kubenswrapper[4811]: I0216 21:00:21.255843 4811 status_manager.go:851] "Failed to get status for pod" podUID="1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:21 crc kubenswrapper[4811]: I0216 21:00:21.256077 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5" pod="openshift-marketplace/redhat-operators-fwbbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fwbbq\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:21 crc kubenswrapper[4811]: I0216 21:00:21.256361 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:21 crc kubenswrapper[4811]: I0216 21:00:21.256694 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c21d535-a947-4399-ac26-4d5bcd1ef31f" pod="openshift-marketplace/community-operators-rrgb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rrgb4\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:22 crc kubenswrapper[4811]: I0216 21:00:22.266377 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 21:00:22 crc kubenswrapper[4811]: E0216 21:00:22.448296 4811 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:22 crc kubenswrapper[4811]: E0216 21:00:22.449167 4811 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:22 crc kubenswrapper[4811]: E0216 21:00:22.454991 4811 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:22 crc kubenswrapper[4811]: E0216 21:00:22.455576 4811 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:22 
crc kubenswrapper[4811]: E0216 21:00:22.455781 4811 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:22 crc kubenswrapper[4811]: I0216 21:00:22.455806 4811 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 16 21:00:22 crc kubenswrapper[4811]: E0216 21:00:22.455981 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="200ms" Feb 16 21:00:22 crc kubenswrapper[4811]: I0216 21:00:22.532266 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 21:00:22 crc kubenswrapper[4811]: I0216 21:00:22.532850 4811 status_manager.go:851] "Failed to get status for pod" podUID="1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:22 crc kubenswrapper[4811]: I0216 21:00:22.533019 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5" pod="openshift-marketplace/redhat-operators-fwbbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fwbbq\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:22 crc kubenswrapper[4811]: I0216 21:00:22.533390 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:22 crc kubenswrapper[4811]: I0216 21:00:22.533944 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c21d535-a947-4399-ac26-4d5bcd1ef31f" pod="openshift-marketplace/community-operators-rrgb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rrgb4\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:22 crc kubenswrapper[4811]: E0216 21:00:22.659107 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="400ms" Feb 16 21:00:22 crc kubenswrapper[4811]: I0216 21:00:22.675808 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a78bc8b-a89b-4473-b54b-d0f31ab9ef89-kube-api-access\") pod \"1a78bc8b-a89b-4473-b54b-d0f31ab9ef89\" (UID: \"1a78bc8b-a89b-4473-b54b-d0f31ab9ef89\") " Feb 16 21:00:22 crc kubenswrapper[4811]: I0216 21:00:22.675943 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a78bc8b-a89b-4473-b54b-d0f31ab9ef89-kubelet-dir\") pod \"1a78bc8b-a89b-4473-b54b-d0f31ab9ef89\" (UID: \"1a78bc8b-a89b-4473-b54b-d0f31ab9ef89\") " Feb 16 21:00:22 crc kubenswrapper[4811]: I0216 21:00:22.676020 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1a78bc8b-a89b-4473-b54b-d0f31ab9ef89-var-lock\") pod \"1a78bc8b-a89b-4473-b54b-d0f31ab9ef89\" (UID: \"1a78bc8b-a89b-4473-b54b-d0f31ab9ef89\") " Feb 16 21:00:22 crc kubenswrapper[4811]: 
I0216 21:00:22.676251 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a78bc8b-a89b-4473-b54b-d0f31ab9ef89-var-lock" (OuterVolumeSpecName: "var-lock") pod "1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" (UID: "1a78bc8b-a89b-4473-b54b-d0f31ab9ef89"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:00:22 crc kubenswrapper[4811]: I0216 21:00:22.676291 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a78bc8b-a89b-4473-b54b-d0f31ab9ef89-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" (UID: "1a78bc8b-a89b-4473-b54b-d0f31ab9ef89"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:00:22 crc kubenswrapper[4811]: I0216 21:00:22.676716 4811 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1a78bc8b-a89b-4473-b54b-d0f31ab9ef89-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:22 crc kubenswrapper[4811]: I0216 21:00:22.676746 4811 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a78bc8b-a89b-4473-b54b-d0f31ab9ef89-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:22 crc kubenswrapper[4811]: I0216 21:00:22.698466 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a78bc8b-a89b-4473-b54b-d0f31ab9ef89-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" (UID: "1a78bc8b-a89b-4473-b54b-d0f31ab9ef89"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:00:22 crc kubenswrapper[4811]: I0216 21:00:22.706038 4811 status_manager.go:851] "Failed to get status for pod" podUID="1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:22 crc kubenswrapper[4811]: I0216 21:00:22.706262 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5" pod="openshift-marketplace/redhat-operators-fwbbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fwbbq\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:22 crc kubenswrapper[4811]: I0216 21:00:22.706429 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:22 crc kubenswrapper[4811]: I0216 21:00:22.706587 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c21d535-a947-4399-ac26-4d5bcd1ef31f" pod="openshift-marketplace/community-operators-rrgb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rrgb4\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:22 crc kubenswrapper[4811]: I0216 21:00:22.780382 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a78bc8b-a89b-4473-b54b-d0f31ab9ef89-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:23 crc kubenswrapper[4811]: E0216 21:00:23.060108 4811 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="800ms" Feb 16 21:00:23 crc kubenswrapper[4811]: I0216 21:00:23.275782 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 21:00:23 crc kubenswrapper[4811]: I0216 21:00:23.276680 4811 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30" exitCode=0 Feb 16 21:00:23 crc kubenswrapper[4811]: I0216 21:00:23.276761 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cdbf575ec4347c11f21e6c7780ec29c4eed7537b6b6b3660792e8338bdb2394" Feb 16 21:00:23 crc kubenswrapper[4811]: I0216 21:00:23.277980 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"1a78bc8b-a89b-4473-b54b-d0f31ab9ef89","Type":"ContainerDied","Data":"516a99aa9c978559103a3701611ffab29f65c9166aa74e618d0be49b611386ec"} Feb 16 21:00:23 crc kubenswrapper[4811]: I0216 21:00:23.278022 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="516a99aa9c978559103a3701611ffab29f65c9166aa74e618d0be49b611386ec" Feb 16 21:00:23 crc kubenswrapper[4811]: I0216 21:00:23.278025 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 21:00:23 crc kubenswrapper[4811]: E0216 21:00:23.861623 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="1.6s" Feb 16 21:00:24 crc kubenswrapper[4811]: I0216 21:00:24.091537 4811 status_manager.go:851] "Failed to get status for pod" podUID="1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:24 crc kubenswrapper[4811]: I0216 21:00:24.092346 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5" pod="openshift-marketplace/redhat-operators-fwbbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fwbbq\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:24 crc kubenswrapper[4811]: I0216 21:00:24.092654 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:24 crc kubenswrapper[4811]: I0216 21:00:24.092928 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c21d535-a947-4399-ac26-4d5bcd1ef31f" pod="openshift-marketplace/community-operators-rrgb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rrgb4\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:24 
crc kubenswrapper[4811]: I0216 21:00:24.094418 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 21:00:24 crc kubenswrapper[4811]: I0216 21:00:24.095947 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:24 crc kubenswrapper[4811]: I0216 21:00:24.096556 4811 status_manager.go:851] "Failed to get status for pod" podUID="1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:24 crc kubenswrapper[4811]: I0216 21:00:24.096958 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5" pod="openshift-marketplace/redhat-operators-fwbbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fwbbq\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:24 crc kubenswrapper[4811]: I0216 21:00:24.097352 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:24 crc kubenswrapper[4811]: I0216 21:00:24.098517 4811 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 
21:00:24 crc kubenswrapper[4811]: I0216 21:00:24.098789 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c21d535-a947-4399-ac26-4d5bcd1ef31f" pod="openshift-marketplace/community-operators-rrgb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rrgb4\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:24 crc kubenswrapper[4811]: I0216 21:00:24.198417 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 21:00:24 crc kubenswrapper[4811]: I0216 21:00:24.198636 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 21:00:24 crc kubenswrapper[4811]: I0216 21:00:24.198659 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 21:00:24 crc kubenswrapper[4811]: I0216 21:00:24.199018 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:00:24 crc kubenswrapper[4811]: I0216 21:00:24.199060 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:00:24 crc kubenswrapper[4811]: I0216 21:00:24.199081 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:00:24 crc kubenswrapper[4811]: I0216 21:00:24.283314 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:24 crc kubenswrapper[4811]: I0216 21:00:24.298885 4811 status_manager.go:851] "Failed to get status for pod" podUID="1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:24 crc kubenswrapper[4811]: I0216 21:00:24.299721 4811 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:24 crc kubenswrapper[4811]: I0216 21:00:24.299742 4811 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:24 crc kubenswrapper[4811]: I0216 21:00:24.299751 4811 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:24 crc kubenswrapper[4811]: I0216 21:00:24.299874 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5" pod="openshift-marketplace/redhat-operators-fwbbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fwbbq\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:24 crc kubenswrapper[4811]: I0216 21:00:24.300026 4811 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" 
Feb 16 21:00:24 crc kubenswrapper[4811]: I0216 21:00:24.300169 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:24 crc kubenswrapper[4811]: I0216 21:00:24.300418 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c21d535-a947-4399-ac26-4d5bcd1ef31f" pod="openshift-marketplace/community-operators-rrgb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rrgb4\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:24 crc kubenswrapper[4811]: I0216 21:00:24.712441 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.037391 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rfd2c" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.037782 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rfd2c" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.088125 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rfd2c" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.088739 4811 status_manager.go:851] "Failed to get status for pod" podUID="1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection 
refused" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.089101 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5" pod="openshift-marketplace/redhat-operators-fwbbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fwbbq\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.089382 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.089797 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c21d535-a947-4399-ac26-4d5bcd1ef31f" pod="openshift-marketplace/community-operators-rrgb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rrgb4\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.090222 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7bd7115-3a3e-4312-8543-2f40686cfdb0" pod="openshift-marketplace/redhat-marketplace-rfd2c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rfd2c\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.223795 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s8hk9" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.224659 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s8hk9" Feb 16 21:00:25 
crc kubenswrapper[4811]: I0216 21:00:25.275710 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s8hk9" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.277359 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c21d535-a947-4399-ac26-4d5bcd1ef31f" pod="openshift-marketplace/community-operators-rrgb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rrgb4\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.278056 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c5c0388-6acf-443c-9db5-486defcdeacd" pod="openshift-marketplace/certified-operators-s8hk9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s8hk9\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.278339 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7bd7115-3a3e-4312-8543-2f40686cfdb0" pod="openshift-marketplace/redhat-marketplace-rfd2c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rfd2c\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.278537 4811 status_manager.go:851] "Failed to get status for pod" podUID="1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.278742 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5" pod="openshift-marketplace/redhat-operators-fwbbq" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fwbbq\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.278921 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.330263 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rfd2c" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.330941 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c21d535-a947-4399-ac26-4d5bcd1ef31f" pod="openshift-marketplace/community-operators-rrgb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rrgb4\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.331403 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c5c0388-6acf-443c-9db5-486defcdeacd" pod="openshift-marketplace/certified-operators-s8hk9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s8hk9\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.331645 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7bd7115-3a3e-4312-8543-2f40686cfdb0" pod="openshift-marketplace/redhat-marketplace-rfd2c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rfd2c\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4811]: 
I0216 21:00:25.332099 4811 status_manager.go:851] "Failed to get status for pod" podUID="1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.332461 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5" pod="openshift-marketplace/redhat-operators-fwbbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fwbbq\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.332734 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.343049 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s8hk9" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.343572 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5" pod="openshift-marketplace/redhat-operators-fwbbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fwbbq\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.343863 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.344074 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c21d535-a947-4399-ac26-4d5bcd1ef31f" pod="openshift-marketplace/community-operators-rrgb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rrgb4\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.344245 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c5c0388-6acf-443c-9db5-486defcdeacd" pod="openshift-marketplace/certified-operators-s8hk9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s8hk9\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.344472 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7bd7115-3a3e-4312-8543-2f40686cfdb0" pod="openshift-marketplace/redhat-marketplace-rfd2c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rfd2c\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4811]: I0216 21:00:25.344742 4811 status_manager.go:851] "Failed to get status for pod" podUID="1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4811]: E0216 21:00:25.462441 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="3.2s" Feb 16 21:00:27 crc kubenswrapper[4811]: I0216 21:00:27.415280 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rrgb4" Feb 16 21:00:27 crc kubenswrapper[4811]: I0216 21:00:27.415633 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rrgb4" Feb 16 21:00:27 crc kubenswrapper[4811]: I0216 21:00:27.464982 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rrgb4" Feb 16 21:00:27 crc kubenswrapper[4811]: I0216 21:00:27.466098 4811 status_manager.go:851] "Failed to get status for pod" podUID="1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:27 crc kubenswrapper[4811]: I0216 21:00:27.466835 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5" pod="openshift-marketplace/redhat-operators-fwbbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fwbbq\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:27 crc kubenswrapper[4811]: I0216 21:00:27.467510 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:27 crc kubenswrapper[4811]: I0216 
21:00:27.467985 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c21d535-a947-4399-ac26-4d5bcd1ef31f" pod="openshift-marketplace/community-operators-rrgb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rrgb4\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:27 crc kubenswrapper[4811]: I0216 21:00:27.468444 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c5c0388-6acf-443c-9db5-486defcdeacd" pod="openshift-marketplace/certified-operators-s8hk9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s8hk9\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:27 crc kubenswrapper[4811]: I0216 21:00:27.468836 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7bd7115-3a3e-4312-8543-2f40686cfdb0" pod="openshift-marketplace/redhat-marketplace-rfd2c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rfd2c\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:27 crc kubenswrapper[4811]: I0216 21:00:27.663662 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fwbbq" Feb 16 21:00:27 crc kubenswrapper[4811]: I0216 21:00:27.663808 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fwbbq" Feb 16 21:00:27 crc kubenswrapper[4811]: I0216 21:00:27.730865 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fwbbq" Feb 16 21:00:27 crc kubenswrapper[4811]: I0216 21:00:27.732562 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:27 crc kubenswrapper[4811]: I0216 21:00:27.733259 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c21d535-a947-4399-ac26-4d5bcd1ef31f" pod="openshift-marketplace/community-operators-rrgb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rrgb4\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:27 crc kubenswrapper[4811]: I0216 21:00:27.736549 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c5c0388-6acf-443c-9db5-486defcdeacd" pod="openshift-marketplace/certified-operators-s8hk9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s8hk9\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:27 crc kubenswrapper[4811]: I0216 21:00:27.737885 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7bd7115-3a3e-4312-8543-2f40686cfdb0" pod="openshift-marketplace/redhat-marketplace-rfd2c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rfd2c\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:27 crc kubenswrapper[4811]: I0216 21:00:27.738558 4811 status_manager.go:851] "Failed to get status for pod" podUID="1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:27 crc kubenswrapper[4811]: I0216 21:00:27.739151 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5" pod="openshift-marketplace/redhat-operators-fwbbq" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fwbbq\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:28 crc kubenswrapper[4811]: I0216 21:00:28.357042 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rrgb4" Feb 16 21:00:28 crc kubenswrapper[4811]: I0216 21:00:28.358022 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c5c0388-6acf-443c-9db5-486defcdeacd" pod="openshift-marketplace/certified-operators-s8hk9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s8hk9\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:28 crc kubenswrapper[4811]: I0216 21:00:28.359028 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7bd7115-3a3e-4312-8543-2f40686cfdb0" pod="openshift-marketplace/redhat-marketplace-rfd2c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rfd2c\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:28 crc kubenswrapper[4811]: I0216 21:00:28.359560 4811 status_manager.go:851] "Failed to get status for pod" podUID="1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:28 crc kubenswrapper[4811]: I0216 21:00:28.360317 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5" pod="openshift-marketplace/redhat-operators-fwbbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fwbbq\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:28 crc kubenswrapper[4811]: I0216 21:00:28.360992 4811 
status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:28 crc kubenswrapper[4811]: I0216 21:00:28.361383 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c21d535-a947-4399-ac26-4d5bcd1ef31f" pod="openshift-marketplace/community-operators-rrgb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rrgb4\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:28 crc kubenswrapper[4811]: I0216 21:00:28.362981 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fwbbq" Feb 16 21:00:28 crc kubenswrapper[4811]: I0216 21:00:28.363654 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c21d535-a947-4399-ac26-4d5bcd1ef31f" pod="openshift-marketplace/community-operators-rrgb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rrgb4\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:28 crc kubenswrapper[4811]: I0216 21:00:28.364161 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c5c0388-6acf-443c-9db5-486defcdeacd" pod="openshift-marketplace/certified-operators-s8hk9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s8hk9\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:28 crc kubenswrapper[4811]: I0216 21:00:28.364392 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7bd7115-3a3e-4312-8543-2f40686cfdb0" pod="openshift-marketplace/redhat-marketplace-rfd2c" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rfd2c\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:28 crc kubenswrapper[4811]: I0216 21:00:28.364794 4811 status_manager.go:851] "Failed to get status for pod" podUID="1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:28 crc kubenswrapper[4811]: I0216 21:00:28.365242 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5" pod="openshift-marketplace/redhat-operators-fwbbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fwbbq\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:28 crc kubenswrapper[4811]: I0216 21:00:28.365615 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:28 crc kubenswrapper[4811]: E0216 21:00:28.663694 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="6.4s" Feb 16 21:00:31 crc kubenswrapper[4811]: E0216 21:00:31.234165 4811 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.9:6443: connect: connection refused" 
event="&Event{ObjectMeta:{community-operators-rrgb4.1894d5cad614ec2b openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-rrgb4,UID:6c21d535-a947-4399-ac26-4d5bcd1ef31f,APIVersion:v1,ResourceVersion:29698,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 385ms (385ms including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 21:00:20.615253035 +0000 UTC m=+238.544548973,LastTimestamp:2026-02-16 21:00:20.615253035 +0000 UTC m=+238.544548973,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 21:00:32 crc kubenswrapper[4811]: I0216 21:00:32.706126 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5" pod="openshift-marketplace/redhat-operators-fwbbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fwbbq\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:32 crc kubenswrapper[4811]: I0216 21:00:32.707565 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:32 crc kubenswrapper[4811]: I0216 21:00:32.708238 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c21d535-a947-4399-ac26-4d5bcd1ef31f" pod="openshift-marketplace/community-operators-rrgb4" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rrgb4\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:32 crc kubenswrapper[4811]: I0216 21:00:32.708823 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c5c0388-6acf-443c-9db5-486defcdeacd" pod="openshift-marketplace/certified-operators-s8hk9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s8hk9\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:32 crc kubenswrapper[4811]: I0216 21:00:32.709367 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7bd7115-3a3e-4312-8543-2f40686cfdb0" pod="openshift-marketplace/redhat-marketplace-rfd2c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rfd2c\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:32 crc kubenswrapper[4811]: I0216 21:00:32.709796 4811 status_manager.go:851] "Failed to get status for pod" podUID="1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:34 crc kubenswrapper[4811]: I0216 21:00:34.706471 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:34 crc kubenswrapper[4811]: I0216 21:00:34.707416 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:34 crc kubenswrapper[4811]: I0216 21:00:34.707830 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c21d535-a947-4399-ac26-4d5bcd1ef31f" pod="openshift-marketplace/community-operators-rrgb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rrgb4\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:34 crc kubenswrapper[4811]: I0216 21:00:34.708726 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c5c0388-6acf-443c-9db5-486defcdeacd" pod="openshift-marketplace/certified-operators-s8hk9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s8hk9\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:34 crc kubenswrapper[4811]: I0216 21:00:34.709487 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7bd7115-3a3e-4312-8543-2f40686cfdb0" pod="openshift-marketplace/redhat-marketplace-rfd2c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rfd2c\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:34 crc kubenswrapper[4811]: I0216 21:00:34.709969 4811 status_manager.go:851] "Failed to get status for pod" podUID="1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:34 crc kubenswrapper[4811]: I0216 21:00:34.710554 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5" pod="openshift-marketplace/redhat-operators-fwbbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fwbbq\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:34 crc kubenswrapper[4811]: I0216 21:00:34.727099 4811 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c014a2e2-6a69-47fc-b547-4dc52873a43e" Feb 16 21:00:34 crc kubenswrapper[4811]: I0216 21:00:34.727144 4811 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c014a2e2-6a69-47fc-b547-4dc52873a43e" Feb 16 21:00:34 crc kubenswrapper[4811]: E0216 21:00:34.727665 4811 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:34 crc kubenswrapper[4811]: I0216 21:00:34.728157 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:34 crc kubenswrapper[4811]: W0216 21:00:34.754988 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-f810458c177e83f68df42a7c2a6ce4c451aa125d1bb1078fd8fb9045b280f5a1 WatchSource:0}: Error finding container f810458c177e83f68df42a7c2a6ce4c451aa125d1bb1078fd8fb9045b280f5a1: Status 404 returned error can't find the container with id f810458c177e83f68df42a7c2a6ce4c451aa125d1bb1078fd8fb9045b280f5a1 Feb 16 21:00:35 crc kubenswrapper[4811]: E0216 21:00:35.065729 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="7s" Feb 16 21:00:35 crc kubenswrapper[4811]: I0216 21:00:35.355082 4811 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="fa65c41a88fbb0929fe59e1203b30978b79b77c5152e0e02187874b86e294e91" exitCode=0 Feb 16 21:00:35 crc kubenswrapper[4811]: I0216 21:00:35.355145 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"fa65c41a88fbb0929fe59e1203b30978b79b77c5152e0e02187874b86e294e91"} Feb 16 21:00:35 crc kubenswrapper[4811]: I0216 21:00:35.355209 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f810458c177e83f68df42a7c2a6ce4c451aa125d1bb1078fd8fb9045b280f5a1"} Feb 16 21:00:35 crc kubenswrapper[4811]: I0216 21:00:35.355584 4811 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="c014a2e2-6a69-47fc-b547-4dc52873a43e" Feb 16 21:00:35 crc kubenswrapper[4811]: I0216 21:00:35.355626 4811 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c014a2e2-6a69-47fc-b547-4dc52873a43e" Feb 16 21:00:35 crc kubenswrapper[4811]: E0216 21:00:35.356112 4811 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:35 crc kubenswrapper[4811]: I0216 21:00:35.356176 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c5c0388-6acf-443c-9db5-486defcdeacd" pod="openshift-marketplace/certified-operators-s8hk9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s8hk9\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:35 crc kubenswrapper[4811]: I0216 21:00:35.356712 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7bd7115-3a3e-4312-8543-2f40686cfdb0" pod="openshift-marketplace/redhat-marketplace-rfd2c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rfd2c\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:35 crc kubenswrapper[4811]: I0216 21:00:35.357004 4811 status_manager.go:851] "Failed to get status for pod" podUID="1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:35 crc kubenswrapper[4811]: I0216 21:00:35.357283 4811 status_manager.go:851] "Failed to get status for pod" podUID="a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5" pod="openshift-marketplace/redhat-operators-fwbbq" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fwbbq\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:35 crc kubenswrapper[4811]: I0216 21:00:35.357581 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:35 crc kubenswrapper[4811]: I0216 21:00:35.357907 4811 status_manager.go:851] "Failed to get status for pod" podUID="6c21d535-a947-4399-ac26-4d5bcd1ef31f" pod="openshift-marketplace/community-operators-rrgb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rrgb4\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 16 21:00:36 crc kubenswrapper[4811]: I0216 21:00:36.163204 4811 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 16 21:00:36 crc kubenswrapper[4811]: I0216 21:00:36.163713 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 16 21:00:36 crc kubenswrapper[4811]: I0216 21:00:36.377474 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d6ab35e279f3b432a647b6e0e2772dda0c2471c65fdae089fb7506b98c563010"} Feb 16 21:00:36 crc kubenswrapper[4811]: I0216 21:00:36.377539 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1ef393160b7834f672d8632c50ea0d516957218129081f44fa306fbf71a144e9"} Feb 16 21:00:36 crc kubenswrapper[4811]: I0216 21:00:36.377553 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"54ea55527abd6340b64aa0e5a7b400d96030613f844f9404a10240dcde08f9f0"} Feb 16 21:00:36 crc kubenswrapper[4811]: I0216 21:00:36.400114 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 21:00:36 crc kubenswrapper[4811]: I0216 21:00:36.400179 4811 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc" exitCode=1 Feb 16 21:00:36 crc kubenswrapper[4811]: I0216 21:00:36.400250 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc"} Feb 16 21:00:36 crc kubenswrapper[4811]: I0216 21:00:36.403575 4811 scope.go:117] "RemoveContainer" containerID="98692e4233bdb7b0e36c66190ce22775da15c36966c5756cc0bf01ac86e8a5dc" Feb 16 21:00:37 crc kubenswrapper[4811]: I0216 21:00:37.410177 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 21:00:37 crc kubenswrapper[4811]: I0216 21:00:37.410297 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bb14d1e274f36930bc24cef40bc54cf15bf961e77342da8e0aadcfadf53f60fe"} Feb 16 21:00:37 crc kubenswrapper[4811]: I0216 21:00:37.415318 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1e686d401802d278e4b37da68b9c2939ec792836f0073c16d599603bbaf39364"} Feb 16 21:00:37 crc kubenswrapper[4811]: I0216 21:00:37.415383 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cc490f14b5306f36a59993560dd12ac44aa905bb6e272b1257b3350365c8c0a7"} Feb 16 21:00:37 crc kubenswrapper[4811]: I0216 21:00:37.415538 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:37 crc kubenswrapper[4811]: I0216 21:00:37.415664 4811 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c014a2e2-6a69-47fc-b547-4dc52873a43e" Feb 16 21:00:37 crc kubenswrapper[4811]: I0216 21:00:37.415697 4811 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c014a2e2-6a69-47fc-b547-4dc52873a43e" Feb 16 21:00:39 crc kubenswrapper[4811]: I0216 21:00:39.728976 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:39 crc kubenswrapper[4811]: I0216 21:00:39.729771 4811 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:39 crc kubenswrapper[4811]: I0216 21:00:39.737416 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:42 crc kubenswrapper[4811]: I0216 21:00:42.428945 4811 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:42 crc kubenswrapper[4811]: I0216 21:00:42.724375 4811 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="07ee7f8e-f5f7-495b-aeea-08a712f1cd7b" Feb 16 21:00:43 crc kubenswrapper[4811]: I0216 21:00:43.460347 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver-check-endpoints/0.log" Feb 16 21:00:43 crc kubenswrapper[4811]: I0216 21:00:43.463856 4811 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="1e686d401802d278e4b37da68b9c2939ec792836f0073c16d599603bbaf39364" exitCode=255 Feb 16 21:00:43 crc kubenswrapper[4811]: I0216 21:00:43.463930 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"1e686d401802d278e4b37da68b9c2939ec792836f0073c16d599603bbaf39364"} Feb 16 21:00:43 crc kubenswrapper[4811]: I0216 21:00:43.465051 4811 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c014a2e2-6a69-47fc-b547-4dc52873a43e" Feb 16 21:00:43 crc kubenswrapper[4811]: I0216 21:00:43.465138 4811 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c014a2e2-6a69-47fc-b547-4dc52873a43e" Feb 16 21:00:43 crc 
kubenswrapper[4811]: I0216 21:00:43.469047 4811 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="07ee7f8e-f5f7-495b-aeea-08a712f1cd7b" Feb 16 21:00:43 crc kubenswrapper[4811]: I0216 21:00:43.471090 4811 scope.go:117] "RemoveContainer" containerID="1e686d401802d278e4b37da68b9c2939ec792836f0073c16d599603bbaf39364" Feb 16 21:00:44 crc kubenswrapper[4811]: I0216 21:00:44.477772 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver-check-endpoints/0.log" Feb 16 21:00:44 crc kubenswrapper[4811]: I0216 21:00:44.480762 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"dc67cf03c9e0445d26f58a9bd984899d866f37be1fad1cb5058931e7f5825487"} Feb 16 21:00:44 crc kubenswrapper[4811]: I0216 21:00:44.481091 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:44 crc kubenswrapper[4811]: I0216 21:00:44.481294 4811 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c014a2e2-6a69-47fc-b547-4dc52873a43e" Feb 16 21:00:44 crc kubenswrapper[4811]: I0216 21:00:44.481342 4811 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c014a2e2-6a69-47fc-b547-4dc52873a43e" Feb 16 21:00:44 crc kubenswrapper[4811]: I0216 21:00:44.486506 4811 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="07ee7f8e-f5f7-495b-aeea-08a712f1cd7b" Feb 16 21:00:45 crc kubenswrapper[4811]: I0216 21:00:45.301000 4811 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 21:00:45 crc kubenswrapper[4811]: I0216 21:00:45.489743 4811 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c014a2e2-6a69-47fc-b547-4dc52873a43e" Feb 16 21:00:45 crc kubenswrapper[4811]: I0216 21:00:45.489798 4811 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c014a2e2-6a69-47fc-b547-4dc52873a43e" Feb 16 21:00:45 crc kubenswrapper[4811]: I0216 21:00:45.494082 4811 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="07ee7f8e-f5f7-495b-aeea-08a712f1cd7b" Feb 16 21:00:45 crc kubenswrapper[4811]: I0216 21:00:45.497418 4811 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://54ea55527abd6340b64aa0e5a7b400d96030613f844f9404a10240dcde08f9f0" Feb 16 21:00:45 crc kubenswrapper[4811]: I0216 21:00:45.497461 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:46 crc kubenswrapper[4811]: I0216 21:00:46.151739 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 21:00:46 crc kubenswrapper[4811]: I0216 21:00:46.158654 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 21:00:46 crc kubenswrapper[4811]: I0216 21:00:46.498771 4811 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c014a2e2-6a69-47fc-b547-4dc52873a43e" Feb 16 21:00:46 crc kubenswrapper[4811]: I0216 21:00:46.498832 4811 mirror_client.go:130] "Deleting a 
mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c014a2e2-6a69-47fc-b547-4dc52873a43e" Feb 16 21:00:46 crc kubenswrapper[4811]: I0216 21:00:46.504334 4811 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="07ee7f8e-f5f7-495b-aeea-08a712f1cd7b" Feb 16 21:00:46 crc kubenswrapper[4811]: I0216 21:00:46.505685 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 21:00:51 crc kubenswrapper[4811]: I0216 21:00:51.409324 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 21:00:51 crc kubenswrapper[4811]: I0216 21:00:51.997380 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 21:00:52 crc kubenswrapper[4811]: I0216 21:00:52.249234 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 21:00:52 crc kubenswrapper[4811]: I0216 21:00:52.342785 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 21:00:52 crc kubenswrapper[4811]: I0216 21:00:52.757035 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 21:00:52 crc kubenswrapper[4811]: I0216 21:00:52.996541 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 21:00:53 crc kubenswrapper[4811]: I0216 21:00:53.142362 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 21:00:53 crc kubenswrapper[4811]: 
I0216 21:00:53.175276 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 21:00:53 crc kubenswrapper[4811]: I0216 21:00:53.235123 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 21:00:53 crc kubenswrapper[4811]: I0216 21:00:53.331347 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 21:00:53 crc kubenswrapper[4811]: I0216 21:00:53.560762 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 21:00:53 crc kubenswrapper[4811]: I0216 21:00:53.604762 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 21:00:53 crc kubenswrapper[4811]: I0216 21:00:53.701356 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 21:00:53 crc kubenswrapper[4811]: I0216 21:00:53.777342 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 21:00:53 crc kubenswrapper[4811]: I0216 21:00:53.933130 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 21:00:53 crc kubenswrapper[4811]: I0216 21:00:53.993955 4811 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 16 21:00:54 crc kubenswrapper[4811]: I0216 21:00:54.041729 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 21:00:54 crc kubenswrapper[4811]: I0216 21:00:54.176518 4811 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 21:00:54 crc kubenswrapper[4811]: I0216 21:00:54.262852 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 21:00:54 crc kubenswrapper[4811]: I0216 21:00:54.299413 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 21:00:54 crc kubenswrapper[4811]: I0216 21:00:54.362898 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 21:00:54 crc kubenswrapper[4811]: I0216 21:00:54.449267 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 21:00:54 crc kubenswrapper[4811]: I0216 21:00:54.518046 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 21:00:54 crc kubenswrapper[4811]: I0216 21:00:54.526259 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 21:00:54 crc kubenswrapper[4811]: I0216 21:00:54.640756 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 21:00:54 crc kubenswrapper[4811]: I0216 21:00:54.725589 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 21:00:54 crc kubenswrapper[4811]: I0216 21:00:54.744036 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 21:00:54 crc kubenswrapper[4811]: I0216 21:00:54.831126 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 21:00:54 crc 
kubenswrapper[4811]: I0216 21:00:54.850049 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 21:00:54 crc kubenswrapper[4811]: I0216 21:00:54.856170 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 21:00:54 crc kubenswrapper[4811]: I0216 21:00:54.906807 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 21:00:54 crc kubenswrapper[4811]: I0216 21:00:54.963096 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 21:00:54 crc kubenswrapper[4811]: I0216 21:00:54.977170 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 21:00:55 crc kubenswrapper[4811]: I0216 21:00:55.090592 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 21:00:55 crc kubenswrapper[4811]: I0216 21:00:55.222706 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 21:00:55 crc kubenswrapper[4811]: I0216 21:00:55.241642 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 21:00:55 crc kubenswrapper[4811]: I0216 21:00:55.404869 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 21:00:55 crc kubenswrapper[4811]: I0216 21:00:55.493999 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 21:00:55 crc kubenswrapper[4811]: I0216 21:00:55.594129 4811 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 21:00:55 crc kubenswrapper[4811]: I0216 21:00:55.685459 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 21:00:55 crc kubenswrapper[4811]: I0216 21:00:55.708393 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 21:00:55 crc kubenswrapper[4811]: I0216 21:00:55.754651 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 21:00:55 crc kubenswrapper[4811]: I0216 21:00:55.815801 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 21:00:55 crc kubenswrapper[4811]: I0216 21:00:55.847015 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 21:00:55 crc kubenswrapper[4811]: I0216 21:00:55.858637 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 21:00:55 crc kubenswrapper[4811]: I0216 21:00:55.898763 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 21:00:55 crc kubenswrapper[4811]: I0216 21:00:55.930102 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 21:00:55 crc kubenswrapper[4811]: I0216 21:00:55.970257 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 21:00:55 crc kubenswrapper[4811]: I0216 21:00:55.998330 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" 
Feb 16 21:00:56 crc kubenswrapper[4811]: I0216 21:00:56.195190 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 21:00:56 crc kubenswrapper[4811]: I0216 21:00:56.221179 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 21:00:56 crc kubenswrapper[4811]: I0216 21:00:56.326875 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 21:00:56 crc kubenswrapper[4811]: I0216 21:00:56.396702 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 16 21:00:56 crc kubenswrapper[4811]: I0216 21:00:56.438276 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 21:00:56 crc kubenswrapper[4811]: I0216 21:00:56.506089 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 21:00:56 crc kubenswrapper[4811]: I0216 21:00:56.575642 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 21:00:56 crc kubenswrapper[4811]: I0216 21:00:56.673609 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 21:00:56 crc kubenswrapper[4811]: I0216 21:00:56.687562 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 21:00:56 crc kubenswrapper[4811]: I0216 21:00:56.850309 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 21:00:56 crc kubenswrapper[4811]: I0216 21:00:56.946256 4811 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 21:00:57 crc kubenswrapper[4811]: I0216 21:00:57.003136 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 21:00:57 crc kubenswrapper[4811]: I0216 21:00:57.049900 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 21:00:57 crc kubenswrapper[4811]: I0216 21:00:57.147016 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 21:00:57 crc kubenswrapper[4811]: I0216 21:00:57.191782 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 21:00:57 crc kubenswrapper[4811]: I0216 21:00:57.320120 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 21:00:57 crc kubenswrapper[4811]: I0216 21:00:57.338722 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 21:00:57 crc kubenswrapper[4811]: I0216 21:00:57.442913 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 21:00:57 crc kubenswrapper[4811]: I0216 21:00:57.458071 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 21:00:57 crc kubenswrapper[4811]: I0216 21:00:57.461512 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 21:00:57 crc kubenswrapper[4811]: I0216 21:00:57.527342 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" 
Feb 16 21:00:57 crc kubenswrapper[4811]: I0216 21:00:57.584037 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 21:00:57 crc kubenswrapper[4811]: I0216 21:00:57.642361 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 16 21:00:57 crc kubenswrapper[4811]: I0216 21:00:57.669921 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 21:00:57 crc kubenswrapper[4811]: I0216 21:00:57.674857 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 16 21:00:57 crc kubenswrapper[4811]: I0216 21:00:57.846874 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 21:00:57 crc kubenswrapper[4811]: I0216 21:00:57.924944 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 21:00:57 crc kubenswrapper[4811]: I0216 21:00:57.995495 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 21:00:58 crc kubenswrapper[4811]: I0216 21:00:58.126809 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 21:00:58 crc kubenswrapper[4811]: I0216 21:00:58.248515 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 21:00:58 crc kubenswrapper[4811]: I0216 21:00:58.265309 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 21:00:58 crc kubenswrapper[4811]: I0216 
21:00:58.402308 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 16 21:00:58 crc kubenswrapper[4811]: I0216 21:00:58.415546 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 21:00:58 crc kubenswrapper[4811]: I0216 21:00:58.659365 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 21:00:58 crc kubenswrapper[4811]: I0216 21:00:58.694175 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 21:00:58 crc kubenswrapper[4811]: I0216 21:00:58.696956 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 21:00:58 crc kubenswrapper[4811]: I0216 21:00:58.793022 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 21:00:58 crc kubenswrapper[4811]: I0216 21:00:58.900913 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 21:00:58 crc kubenswrapper[4811]: I0216 21:00:58.948658 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 21:00:58 crc kubenswrapper[4811]: I0216 21:00:58.983923 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 21:00:59 crc kubenswrapper[4811]: I0216 21:00:59.021742 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 21:00:59 crc kubenswrapper[4811]: I0216 21:00:59.066976 4811 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 21:00:59 crc kubenswrapper[4811]: I0216 21:00:59.094270 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 21:00:59 crc kubenswrapper[4811]: I0216 21:00:59.143660 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 21:00:59 crc kubenswrapper[4811]: I0216 21:00:59.188869 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 21:00:59 crc kubenswrapper[4811]: I0216 21:00:59.367466 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 21:00:59 crc kubenswrapper[4811]: I0216 21:00:59.445510 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 21:00:59 crc kubenswrapper[4811]: I0216 21:00:59.487151 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 21:00:59 crc kubenswrapper[4811]: I0216 21:00:59.499108 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 21:00:59 crc kubenswrapper[4811]: I0216 21:00:59.572631 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 21:00:59 crc kubenswrapper[4811]: I0216 21:00:59.580438 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 21:00:59 crc kubenswrapper[4811]: I0216 21:00:59.616597 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 21:00:59 crc kubenswrapper[4811]: I0216 21:00:59.648917 4811 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 21:00:59 crc kubenswrapper[4811]: I0216 21:00:59.672408 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 21:00:59 crc kubenswrapper[4811]: I0216 21:00:59.678445 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 16 21:00:59 crc kubenswrapper[4811]: I0216 21:00:59.696170 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 21:00:59 crc kubenswrapper[4811]: I0216 21:00:59.710297 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 21:00:59 crc kubenswrapper[4811]: I0216 21:00:59.745530 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 21:00:59 crc kubenswrapper[4811]: I0216 21:00:59.756842 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 21:00:59 crc kubenswrapper[4811]: I0216 21:00:59.856534 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 21:00:59 crc kubenswrapper[4811]: I0216 21:00:59.953844 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 16 21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.021590 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.035357 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.093558 4811 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.167547 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.244274 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.244412 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.260656 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.260663 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.309983 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.312215 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.371849 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.387794 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.395571 4811 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.400949 4811 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.473429 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.474689 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.537562 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.563021 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.610761 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.680387 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.724764 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.726362 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.790450 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.907429 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 
21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.912541 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.960253 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 21:01:00 crc kubenswrapper[4811]: I0216 21:01:00.980536 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.008415 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.040533 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.040580 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.095573 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.097705 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.110240 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.136914 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.210046 4811 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"image-registry-certificates" Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.244790 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.252337 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.297226 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.327287 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.372454 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.598495 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.615405 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.618933 4811 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.622648 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fwbbq" podStartSLOduration=42.152421158 podStartE2EDuration="44.622616589s" podCreationTimestamp="2026-02-16 21:00:17 +0000 UTC" firstStartedPulling="2026-02-16 21:00:18.177933833 +0000 UTC m=+236.107229771" lastFinishedPulling="2026-02-16 21:00:20.648129264 +0000 UTC m=+238.577425202" 
observedRunningTime="2026-02-16 21:00:42.203003255 +0000 UTC m=+260.132299193" watchObservedRunningTime="2026-02-16 21:01:01.622616589 +0000 UTC m=+279.551912527" Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.623009 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rrgb4" podStartSLOduration=42.190920287 podStartE2EDuration="44.623002308s" podCreationTimestamp="2026-02-16 21:00:17 +0000 UTC" firstStartedPulling="2026-02-16 21:00:18.183152144 +0000 UTC m=+236.112448082" lastFinishedPulling="2026-02-16 21:00:20.615234165 +0000 UTC m=+238.544530103" observedRunningTime="2026-02-16 21:00:42.228873897 +0000 UTC m=+260.158169835" watchObservedRunningTime="2026-02-16 21:01:01.623002308 +0000 UTC m=+279.552298246" Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.624871 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=41.624859174 podStartE2EDuration="41.624859174s" podCreationTimestamp="2026-02-16 21:00:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:00:42.21274732 +0000 UTC m=+260.142043258" watchObservedRunningTime="2026-02-16 21:01:01.624859174 +0000 UTC m=+279.554155132" Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.625641 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.625713 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.630342 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.649594 4811 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.649568457 podStartE2EDuration="19.649568457s" podCreationTimestamp="2026-02-16 21:00:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:01:01.644398118 +0000 UTC m=+279.573694076" watchObservedRunningTime="2026-02-16 21:01:01.649568457 +0000 UTC m=+279.578864395" Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.658232 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.707056 4811 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.818996 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.821186 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.846710 4811 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.881716 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 21:01:01 crc kubenswrapper[4811]: I0216 21:01:01.894850 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 21:01:02 crc kubenswrapper[4811]: I0216 21:01:02.021770 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 21:01:02 crc kubenswrapper[4811]: I0216 21:01:02.035876 
4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 16 21:01:02 crc kubenswrapper[4811]: I0216 21:01:02.036557 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 21:01:02 crc kubenswrapper[4811]: I0216 21:01:02.083760 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 21:01:02 crc kubenswrapper[4811]: I0216 21:01:02.092543 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 21:01:02 crc kubenswrapper[4811]: I0216 21:01:02.151222 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 21:01:02 crc kubenswrapper[4811]: I0216 21:01:02.263332 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 16 21:01:02 crc kubenswrapper[4811]: I0216 21:01:02.273849 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 21:01:02 crc kubenswrapper[4811]: I0216 21:01:02.337384 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 21:01:02 crc kubenswrapper[4811]: I0216 21:01:02.341285 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 21:01:02 crc kubenswrapper[4811]: I0216 21:01:02.370117 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 21:01:02 crc kubenswrapper[4811]: I0216 21:01:02.517211 4811 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 21:01:02 crc kubenswrapper[4811]: I0216 21:01:02.534739 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 21:01:02 crc kubenswrapper[4811]: I0216 21:01:02.535355 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 21:01:02 crc kubenswrapper[4811]: I0216 21:01:02.558339 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 21:01:02 crc kubenswrapper[4811]: I0216 21:01:02.658438 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 21:01:02 crc kubenswrapper[4811]: I0216 21:01:02.775526 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 21:01:02 crc kubenswrapper[4811]: I0216 21:01:02.869773 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 21:01:02 crc kubenswrapper[4811]: I0216 21:01:02.921664 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 21:01:02 crc kubenswrapper[4811]: I0216 21:01:02.963697 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 21:01:03 crc kubenswrapper[4811]: I0216 21:01:03.005899 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 21:01:03 crc kubenswrapper[4811]: I0216 21:01:03.074512 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 21:01:03 crc 
kubenswrapper[4811]: I0216 21:01:03.194263 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 21:01:03 crc kubenswrapper[4811]: I0216 21:01:03.208762 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 21:01:03 crc kubenswrapper[4811]: I0216 21:01:03.237559 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 21:01:03 crc kubenswrapper[4811]: I0216 21:01:03.317734 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 21:01:03 crc kubenswrapper[4811]: I0216 21:01:03.318776 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 21:01:03 crc kubenswrapper[4811]: I0216 21:01:03.329241 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 21:01:03 crc kubenswrapper[4811]: I0216 21:01:03.345941 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 21:01:03 crc kubenswrapper[4811]: I0216 21:01:03.415533 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 21:01:03 crc kubenswrapper[4811]: I0216 21:01:03.417951 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 21:01:03 crc kubenswrapper[4811]: I0216 21:01:03.427158 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 21:01:03 crc kubenswrapper[4811]: I0216 21:01:03.433600 4811 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 21:01:03 crc kubenswrapper[4811]: I0216 21:01:03.478836 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 21:01:03 crc kubenswrapper[4811]: I0216 21:01:03.501301 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 21:01:03 crc kubenswrapper[4811]: I0216 21:01:03.604845 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 21:01:03 crc kubenswrapper[4811]: I0216 21:01:03.699542 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 21:01:03 crc kubenswrapper[4811]: I0216 21:01:03.765989 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 21:01:03 crc kubenswrapper[4811]: I0216 21:01:03.772006 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 21:01:03 crc kubenswrapper[4811]: I0216 21:01:03.978897 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 21:01:04 crc kubenswrapper[4811]: I0216 21:01:04.009245 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 21:01:04 crc kubenswrapper[4811]: I0216 21:01:04.147150 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 21:01:04 crc kubenswrapper[4811]: I0216 21:01:04.183280 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 21:01:04 crc 
kubenswrapper[4811]: I0216 21:01:04.221792 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 21:01:04 crc kubenswrapper[4811]: I0216 21:01:04.278872 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 21:01:04 crc kubenswrapper[4811]: I0216 21:01:04.338177 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 21:01:04 crc kubenswrapper[4811]: I0216 21:01:04.437443 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 21:01:04 crc kubenswrapper[4811]: I0216 21:01:04.461601 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 21:01:04 crc kubenswrapper[4811]: I0216 21:01:04.478840 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 21:01:04 crc kubenswrapper[4811]: I0216 21:01:04.483065 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 21:01:04 crc kubenswrapper[4811]: I0216 21:01:04.511894 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 21:01:04 crc kubenswrapper[4811]: I0216 21:01:04.573432 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 21:01:04 crc kubenswrapper[4811]: I0216 21:01:04.579171 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 21:01:04 crc kubenswrapper[4811]: I0216 21:01:04.589923 4811 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 21:01:04 crc kubenswrapper[4811]: I0216 21:01:04.622591 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 21:01:04 crc kubenswrapper[4811]: I0216 21:01:04.715377 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 21:01:04 crc kubenswrapper[4811]: I0216 21:01:04.772637 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 21:01:04 crc kubenswrapper[4811]: I0216 21:01:04.793362 4811 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 21:01:04 crc kubenswrapper[4811]: I0216 21:01:04.793700 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://e35afac064215ad90411f6bf91d64a0fe491c93b8ff5bb0d6e654a47214b0d39" gracePeriod=5 Feb 16 21:01:04 crc kubenswrapper[4811]: I0216 21:01:04.828501 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 21:01:04 crc kubenswrapper[4811]: I0216 21:01:04.846064 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 21:01:04 crc kubenswrapper[4811]: I0216 21:01:04.939107 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 21:01:05 crc kubenswrapper[4811]: I0216 21:01:05.070456 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 21:01:05 crc kubenswrapper[4811]: I0216 21:01:05.180647 4811 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 21:01:05 crc kubenswrapper[4811]: I0216 21:01:05.377332 4811 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 21:01:05 crc kubenswrapper[4811]: I0216 21:01:05.404231 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 21:01:05 crc kubenswrapper[4811]: I0216 21:01:05.715322 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 21:01:05 crc kubenswrapper[4811]: I0216 21:01:05.722843 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 21:01:05 crc kubenswrapper[4811]: I0216 21:01:05.731059 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 21:01:05 crc kubenswrapper[4811]: I0216 21:01:05.755872 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 21:01:05 crc kubenswrapper[4811]: I0216 21:01:05.878880 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 21:01:05 crc kubenswrapper[4811]: I0216 21:01:05.993661 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 21:01:06 crc kubenswrapper[4811]: I0216 21:01:06.054635 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 21:01:06 crc kubenswrapper[4811]: I0216 21:01:06.120596 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 21:01:06 crc kubenswrapper[4811]: I0216 21:01:06.385896 4811 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 21:01:06 crc kubenswrapper[4811]: I0216 21:01:06.554403 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 21:01:06 crc kubenswrapper[4811]: I0216 21:01:06.677387 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 21:01:06 crc kubenswrapper[4811]: I0216 21:01:06.748704 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 21:01:06 crc kubenswrapper[4811]: I0216 21:01:06.753667 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 21:01:06 crc kubenswrapper[4811]: I0216 21:01:06.937448 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 21:01:06 crc kubenswrapper[4811]: I0216 21:01:06.994178 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 21:01:07 crc kubenswrapper[4811]: I0216 21:01:07.133255 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 21:01:07 crc kubenswrapper[4811]: I0216 21:01:07.351396 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 21:01:07 crc kubenswrapper[4811]: I0216 21:01:07.465554 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 21:01:07 crc kubenswrapper[4811]: I0216 21:01:07.534728 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 21:01:07 crc kubenswrapper[4811]: I0216 21:01:07.807091 4811 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 21:01:08 crc kubenswrapper[4811]: I0216 21:01:08.078828 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 21:01:08 crc kubenswrapper[4811]: I0216 21:01:08.333150 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 21:01:08 crc kubenswrapper[4811]: I0216 21:01:08.460600 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 21:01:08 crc kubenswrapper[4811]: I0216 21:01:08.497758 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 21:01:09 crc kubenswrapper[4811]: I0216 21:01:09.158146 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 21:01:09 crc kubenswrapper[4811]: I0216 21:01:09.575302 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 21:01:09 crc kubenswrapper[4811]: I0216 21:01:09.579940 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.402639 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.403334 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.543512 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.543669 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.543715 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.543838 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.543873 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.543927 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.544089 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.544089 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.544273 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.544752 4811 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.544794 4811 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.544817 4811 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.544837 4811 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.556239 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.646262 4811 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.672443 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.672504 4811 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="e35afac064215ad90411f6bf91d64a0fe491c93b8ff5bb0d6e654a47214b0d39" exitCode=137 Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.672568 4811 scope.go:117] "RemoveContainer" containerID="e35afac064215ad90411f6bf91d64a0fe491c93b8ff5bb0d6e654a47214b0d39" Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.672606 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.693707 4811 scope.go:117] "RemoveContainer" containerID="e35afac064215ad90411f6bf91d64a0fe491c93b8ff5bb0d6e654a47214b0d39" Feb 16 21:01:10 crc kubenswrapper[4811]: E0216 21:01:10.694253 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e35afac064215ad90411f6bf91d64a0fe491c93b8ff5bb0d6e654a47214b0d39\": container with ID starting with e35afac064215ad90411f6bf91d64a0fe491c93b8ff5bb0d6e654a47214b0d39 not found: ID does not exist" containerID="e35afac064215ad90411f6bf91d64a0fe491c93b8ff5bb0d6e654a47214b0d39" Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.694321 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e35afac064215ad90411f6bf91d64a0fe491c93b8ff5bb0d6e654a47214b0d39"} err="failed to get container status \"e35afac064215ad90411f6bf91d64a0fe491c93b8ff5bb0d6e654a47214b0d39\": rpc error: code = NotFound desc = could not find container \"e35afac064215ad90411f6bf91d64a0fe491c93b8ff5bb0d6e654a47214b0d39\": container with ID starting with e35afac064215ad90411f6bf91d64a0fe491c93b8ff5bb0d6e654a47214b0d39 not found: ID does not exist" Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.713920 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.714231 4811 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.726441 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 
21:01:10.726493 4811 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="79187781-c35b-4460-8145-00fd0740d527" Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.729649 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 21:01:10 crc kubenswrapper[4811]: I0216 21:01:10.729695 4811 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="79187781-c35b-4460-8145-00fd0740d527" Feb 16 21:01:22 crc kubenswrapper[4811]: I0216 21:01:22.509766 4811 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 16 21:01:26 crc kubenswrapper[4811]: I0216 21:01:26.565063 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.423369 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hxljc"] Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.424874 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" podUID="a956e785-7e90-41d8-97ea-d89664b3719a" containerName="controller-manager" containerID="cri-o://e0b38235ea1ec1141011fb74bbfd0d03028ce1f5eb8bc51237beae787a68a567" gracePeriod=30 Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.435961 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd"] Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.436315 4811 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd" podUID="e17f4635-2bd6-4ad1-b337-63c0e87ac247" containerName="route-controller-manager" containerID="cri-o://16cc6369d95929998f9e7c7b260446afd8ba86598d733e45d2ba266d9fb63c17" gracePeriod=30 Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.819550 4811 generic.go:334] "Generic (PLEG): container finished" podID="e17f4635-2bd6-4ad1-b337-63c0e87ac247" containerID="16cc6369d95929998f9e7c7b260446afd8ba86598d733e45d2ba266d9fb63c17" exitCode=0 Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.819627 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd" event={"ID":"e17f4635-2bd6-4ad1-b337-63c0e87ac247","Type":"ContainerDied","Data":"16cc6369d95929998f9e7c7b260446afd8ba86598d733e45d2ba266d9fb63c17"} Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.821500 4811 generic.go:334] "Generic (PLEG): container finished" podID="a956e785-7e90-41d8-97ea-d89664b3719a" containerID="e0b38235ea1ec1141011fb74bbfd0d03028ce1f5eb8bc51237beae787a68a567" exitCode=0 Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.821542 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" event={"ID":"a956e785-7e90-41d8-97ea-d89664b3719a","Type":"ContainerDied","Data":"e0b38235ea1ec1141011fb74bbfd0d03028ce1f5eb8bc51237beae787a68a567"} Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.862684 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd" Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.870605 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.975383 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e17f4635-2bd6-4ad1-b337-63c0e87ac247-serving-cert\") pod \"e17f4635-2bd6-4ad1-b337-63c0e87ac247\" (UID: \"e17f4635-2bd6-4ad1-b337-63c0e87ac247\") " Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.975816 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e17f4635-2bd6-4ad1-b337-63c0e87ac247-config\") pod \"e17f4635-2bd6-4ad1-b337-63c0e87ac247\" (UID: \"e17f4635-2bd6-4ad1-b337-63c0e87ac247\") " Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.975931 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e17f4635-2bd6-4ad1-b337-63c0e87ac247-client-ca\") pod \"e17f4635-2bd6-4ad1-b337-63c0e87ac247\" (UID: \"e17f4635-2bd6-4ad1-b337-63c0e87ac247\") " Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.976056 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvm56\" (UniqueName: \"kubernetes.io/projected/e17f4635-2bd6-4ad1-b337-63c0e87ac247-kube-api-access-nvm56\") pod \"e17f4635-2bd6-4ad1-b337-63c0e87ac247\" (UID: \"e17f4635-2bd6-4ad1-b337-63c0e87ac247\") " Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.976615 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a956e785-7e90-41d8-97ea-d89664b3719a-serving-cert\") pod \"a956e785-7e90-41d8-97ea-d89664b3719a\" (UID: \"a956e785-7e90-41d8-97ea-d89664b3719a\") " Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.976388 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/e17f4635-2bd6-4ad1-b337-63c0e87ac247-client-ca" (OuterVolumeSpecName: "client-ca") pod "e17f4635-2bd6-4ad1-b337-63c0e87ac247" (UID: "e17f4635-2bd6-4ad1-b337-63c0e87ac247"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.976708 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a956e785-7e90-41d8-97ea-d89664b3719a-config\") pod \"a956e785-7e90-41d8-97ea-d89664b3719a\" (UID: \"a956e785-7e90-41d8-97ea-d89664b3719a\") " Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.976883 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a956e785-7e90-41d8-97ea-d89664b3719a-client-ca\") pod \"a956e785-7e90-41d8-97ea-d89664b3719a\" (UID: \"a956e785-7e90-41d8-97ea-d89664b3719a\") " Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.976931 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5zb6\" (UniqueName: \"kubernetes.io/projected/a956e785-7e90-41d8-97ea-d89664b3719a-kube-api-access-v5zb6\") pod \"a956e785-7e90-41d8-97ea-d89664b3719a\" (UID: \"a956e785-7e90-41d8-97ea-d89664b3719a\") " Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.976955 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a956e785-7e90-41d8-97ea-d89664b3719a-proxy-ca-bundles\") pod \"a956e785-7e90-41d8-97ea-d89664b3719a\" (UID: \"a956e785-7e90-41d8-97ea-d89664b3719a\") " Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.977621 4811 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e17f4635-2bd6-4ad1-b337-63c0e87ac247-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 
21:01:30.977746 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a956e785-7e90-41d8-97ea-d89664b3719a-client-ca" (OuterVolumeSpecName: "client-ca") pod "a956e785-7e90-41d8-97ea-d89664b3719a" (UID: "a956e785-7e90-41d8-97ea-d89664b3719a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.977795 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a956e785-7e90-41d8-97ea-d89664b3719a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a956e785-7e90-41d8-97ea-d89664b3719a" (UID: "a956e785-7e90-41d8-97ea-d89664b3719a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.978048 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a956e785-7e90-41d8-97ea-d89664b3719a-config" (OuterVolumeSpecName: "config") pod "a956e785-7e90-41d8-97ea-d89664b3719a" (UID: "a956e785-7e90-41d8-97ea-d89664b3719a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.978092 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e17f4635-2bd6-4ad1-b337-63c0e87ac247-config" (OuterVolumeSpecName: "config") pod "e17f4635-2bd6-4ad1-b337-63c0e87ac247" (UID: "e17f4635-2bd6-4ad1-b337-63c0e87ac247"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.981807 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a956e785-7e90-41d8-97ea-d89664b3719a-kube-api-access-v5zb6" (OuterVolumeSpecName: "kube-api-access-v5zb6") pod "a956e785-7e90-41d8-97ea-d89664b3719a" (UID: "a956e785-7e90-41d8-97ea-d89664b3719a"). InnerVolumeSpecName "kube-api-access-v5zb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.982031 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17f4635-2bd6-4ad1-b337-63c0e87ac247-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e17f4635-2bd6-4ad1-b337-63c0e87ac247" (UID: "e17f4635-2bd6-4ad1-b337-63c0e87ac247"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.982378 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a956e785-7e90-41d8-97ea-d89664b3719a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a956e785-7e90-41d8-97ea-d89664b3719a" (UID: "a956e785-7e90-41d8-97ea-d89664b3719a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:01:30 crc kubenswrapper[4811]: I0216 21:01:30.982630 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e17f4635-2bd6-4ad1-b337-63c0e87ac247-kube-api-access-nvm56" (OuterVolumeSpecName: "kube-api-access-nvm56") pod "e17f4635-2bd6-4ad1-b337-63c0e87ac247" (UID: "e17f4635-2bd6-4ad1-b337-63c0e87ac247"). InnerVolumeSpecName "kube-api-access-nvm56". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.078643 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e17f4635-2bd6-4ad1-b337-63c0e87ac247-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.078717 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvm56\" (UniqueName: \"kubernetes.io/projected/e17f4635-2bd6-4ad1-b337-63c0e87ac247-kube-api-access-nvm56\") on node \"crc\" DevicePath \"\"" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.078738 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a956e785-7e90-41d8-97ea-d89664b3719a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.078755 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a956e785-7e90-41d8-97ea-d89664b3719a-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.078767 4811 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a956e785-7e90-41d8-97ea-d89664b3719a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.078778 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5zb6\" (UniqueName: \"kubernetes.io/projected/a956e785-7e90-41d8-97ea-d89664b3719a-kube-api-access-v5zb6\") on node \"crc\" DevicePath \"\"" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.078788 4811 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a956e785-7e90-41d8-97ea-d89664b3719a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.078799 4811 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e17f4635-2bd6-4ad1-b337-63c0e87ac247-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.768729 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-dd8f4957d-klzfl"] Feb 16 21:01:31 crc kubenswrapper[4811]: E0216 21:01:31.769398 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a956e785-7e90-41d8-97ea-d89664b3719a" containerName="controller-manager" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.769417 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="a956e785-7e90-41d8-97ea-d89664b3719a" containerName="controller-manager" Feb 16 21:01:31 crc kubenswrapper[4811]: E0216 21:01:31.769439 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" containerName="installer" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.769446 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" containerName="installer" Feb 16 21:01:31 crc kubenswrapper[4811]: E0216 21:01:31.769458 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.769465 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 21:01:31 crc kubenswrapper[4811]: E0216 21:01:31.769477 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e17f4635-2bd6-4ad1-b337-63c0e87ac247" containerName="route-controller-manager" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.769485 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e17f4635-2bd6-4ad1-b337-63c0e87ac247" containerName="route-controller-manager" Feb 16 21:01:31 crc 
kubenswrapper[4811]: I0216 21:01:31.769588 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="a956e785-7e90-41d8-97ea-d89664b3719a" containerName="controller-manager" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.769602 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e17f4635-2bd6-4ad1-b337-63c0e87ac247" containerName="route-controller-manager" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.769614 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.769623 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a78bc8b-a89b-4473-b54b-d0f31ab9ef89" containerName="installer" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.770122 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.785507 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dd8f4957d-klzfl"] Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.790066 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp"] Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.790789 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.808789 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp"] Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.815289 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19614070-a53f-4f4c-9d3f-63d424b05b8c-serving-cert\") pod \"route-controller-manager-965d95948-nj4cp\" (UID: \"19614070-a53f-4f4c-9d3f-63d424b05b8c\") " pod="openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.815338 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85ac6a76-264e-4810-9488-0424c2405c00-proxy-ca-bundles\") pod \"controller-manager-dd8f4957d-klzfl\" (UID: \"85ac6a76-264e-4810-9488-0424c2405c00\") " pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.815371 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pqfm\" (UniqueName: \"kubernetes.io/projected/19614070-a53f-4f4c-9d3f-63d424b05b8c-kube-api-access-6pqfm\") pod \"route-controller-manager-965d95948-nj4cp\" (UID: \"19614070-a53f-4f4c-9d3f-63d424b05b8c\") " pod="openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.815501 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19614070-a53f-4f4c-9d3f-63d424b05b8c-client-ca\") pod \"route-controller-manager-965d95948-nj4cp\" (UID: 
\"19614070-a53f-4f4c-9d3f-63d424b05b8c\") " pod="openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.815522 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57wq9\" (UniqueName: \"kubernetes.io/projected/85ac6a76-264e-4810-9488-0424c2405c00-kube-api-access-57wq9\") pod \"controller-manager-dd8f4957d-klzfl\" (UID: \"85ac6a76-264e-4810-9488-0424c2405c00\") " pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.815570 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ac6a76-264e-4810-9488-0424c2405c00-client-ca\") pod \"controller-manager-dd8f4957d-klzfl\" (UID: \"85ac6a76-264e-4810-9488-0424c2405c00\") " pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.815717 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ac6a76-264e-4810-9488-0424c2405c00-serving-cert\") pod \"controller-manager-dd8f4957d-klzfl\" (UID: \"85ac6a76-264e-4810-9488-0424c2405c00\") " pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.815788 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19614070-a53f-4f4c-9d3f-63d424b05b8c-config\") pod \"route-controller-manager-965d95948-nj4cp\" (UID: \"19614070-a53f-4f4c-9d3f-63d424b05b8c\") " pod="openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.815834 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ac6a76-264e-4810-9488-0424c2405c00-config\") pod \"controller-manager-dd8f4957d-klzfl\" (UID: \"85ac6a76-264e-4810-9488-0424c2405c00\") " pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.829449 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.829619 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd" event={"ID":"e17f4635-2bd6-4ad1-b337-63c0e87ac247","Type":"ContainerDied","Data":"384489e297e4886d9b56ecc145374ec2aa2698f3309074a36628fe69b2e6ac08"} Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.829677 4811 scope.go:117] "RemoveContainer" containerID="16cc6369d95929998f9e7c7b260446afd8ba86598d733e45d2ba266d9fb63c17" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.831040 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" event={"ID":"a956e785-7e90-41d8-97ea-d89664b3719a","Type":"ContainerDied","Data":"82026325063e3af3d37265c255b4aaa85dd818908c6e7f08d24129658da11c87"} Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.831101 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hxljc" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.847674 4811 scope.go:117] "RemoveContainer" containerID="e0b38235ea1ec1141011fb74bbfd0d03028ce1f5eb8bc51237beae787a68a567" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.866083 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hxljc"] Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.871171 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hxljc"] Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.874601 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd"] Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.878160 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zqrkd"] Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.916779 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19614070-a53f-4f4c-9d3f-63d424b05b8c-config\") pod \"route-controller-manager-965d95948-nj4cp\" (UID: \"19614070-a53f-4f4c-9d3f-63d424b05b8c\") " pod="openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.916862 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ac6a76-264e-4810-9488-0424c2405c00-config\") pod \"controller-manager-dd8f4957d-klzfl\" (UID: \"85ac6a76-264e-4810-9488-0424c2405c00\") " pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.917041 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19614070-a53f-4f4c-9d3f-63d424b05b8c-serving-cert\") pod \"route-controller-manager-965d95948-nj4cp\" (UID: \"19614070-a53f-4f4c-9d3f-63d424b05b8c\") " pod="openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.917084 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85ac6a76-264e-4810-9488-0424c2405c00-proxy-ca-bundles\") pod \"controller-manager-dd8f4957d-klzfl\" (UID: \"85ac6a76-264e-4810-9488-0424c2405c00\") " pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.917109 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pqfm\" (UniqueName: \"kubernetes.io/projected/19614070-a53f-4f4c-9d3f-63d424b05b8c-kube-api-access-6pqfm\") pod \"route-controller-manager-965d95948-nj4cp\" (UID: \"19614070-a53f-4f4c-9d3f-63d424b05b8c\") " pod="openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.917155 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19614070-a53f-4f4c-9d3f-63d424b05b8c-client-ca\") pod \"route-controller-manager-965d95948-nj4cp\" (UID: \"19614070-a53f-4f4c-9d3f-63d424b05b8c\") " pod="openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.917174 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57wq9\" (UniqueName: \"kubernetes.io/projected/85ac6a76-264e-4810-9488-0424c2405c00-kube-api-access-57wq9\") pod \"controller-manager-dd8f4957d-klzfl\" (UID: 
\"85ac6a76-264e-4810-9488-0424c2405c00\") " pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.917256 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ac6a76-264e-4810-9488-0424c2405c00-client-ca\") pod \"controller-manager-dd8f4957d-klzfl\" (UID: \"85ac6a76-264e-4810-9488-0424c2405c00\") " pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.917285 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ac6a76-264e-4810-9488-0424c2405c00-serving-cert\") pod \"controller-manager-dd8f4957d-klzfl\" (UID: \"85ac6a76-264e-4810-9488-0424c2405c00\") " pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.918425 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19614070-a53f-4f4c-9d3f-63d424b05b8c-config\") pod \"route-controller-manager-965d95948-nj4cp\" (UID: \"19614070-a53f-4f4c-9d3f-63d424b05b8c\") " pod="openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.923387 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19614070-a53f-4f4c-9d3f-63d424b05b8c-client-ca\") pod \"route-controller-manager-965d95948-nj4cp\" (UID: \"19614070-a53f-4f4c-9d3f-63d424b05b8c\") " pod="openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.923914 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/85ac6a76-264e-4810-9488-0424c2405c00-client-ca\") pod \"controller-manager-dd8f4957d-klzfl\" (UID: \"85ac6a76-264e-4810-9488-0424c2405c00\") " pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.924385 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ac6a76-264e-4810-9488-0424c2405c00-config\") pod \"controller-manager-dd8f4957d-klzfl\" (UID: \"85ac6a76-264e-4810-9488-0424c2405c00\") " pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.924743 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85ac6a76-264e-4810-9488-0424c2405c00-proxy-ca-bundles\") pod \"controller-manager-dd8f4957d-klzfl\" (UID: \"85ac6a76-264e-4810-9488-0424c2405c00\") " pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.926790 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ac6a76-264e-4810-9488-0424c2405c00-serving-cert\") pod \"controller-manager-dd8f4957d-klzfl\" (UID: \"85ac6a76-264e-4810-9488-0424c2405c00\") " pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.928662 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19614070-a53f-4f4c-9d3f-63d424b05b8c-serving-cert\") pod \"route-controller-manager-965d95948-nj4cp\" (UID: \"19614070-a53f-4f4c-9d3f-63d424b05b8c\") " pod="openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.937324 4811 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-6pqfm\" (UniqueName: \"kubernetes.io/projected/19614070-a53f-4f4c-9d3f-63d424b05b8c-kube-api-access-6pqfm\") pod \"route-controller-manager-965d95948-nj4cp\" (UID: \"19614070-a53f-4f4c-9d3f-63d424b05b8c\") " pod="openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp" Feb 16 21:01:31 crc kubenswrapper[4811]: I0216 21:01:31.938577 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57wq9\" (UniqueName: \"kubernetes.io/projected/85ac6a76-264e-4810-9488-0424c2405c00-kube-api-access-57wq9\") pod \"controller-manager-dd8f4957d-klzfl\" (UID: \"85ac6a76-264e-4810-9488-0424c2405c00\") " pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" Feb 16 21:01:32 crc kubenswrapper[4811]: I0216 21:01:32.092608 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" Feb 16 21:01:32 crc kubenswrapper[4811]: I0216 21:01:32.118848 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp" Feb 16 21:01:32 crc kubenswrapper[4811]: I0216 21:01:32.302717 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dd8f4957d-klzfl"] Feb 16 21:01:32 crc kubenswrapper[4811]: I0216 21:01:32.348239 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp"] Feb 16 21:01:32 crc kubenswrapper[4811]: I0216 21:01:32.708870 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a956e785-7e90-41d8-97ea-d89664b3719a" path="/var/lib/kubelet/pods/a956e785-7e90-41d8-97ea-d89664b3719a/volumes" Feb 16 21:01:32 crc kubenswrapper[4811]: I0216 21:01:32.709843 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e17f4635-2bd6-4ad1-b337-63c0e87ac247" path="/var/lib/kubelet/pods/e17f4635-2bd6-4ad1-b337-63c0e87ac247/volumes" Feb 16 21:01:32 crc kubenswrapper[4811]: I0216 21:01:32.838313 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" event={"ID":"85ac6a76-264e-4810-9488-0424c2405c00","Type":"ContainerStarted","Data":"f1b1ed22406232749aa28799542e6d515eb0ff1f36a15b0c1cfe402f25603c36"} Feb 16 21:01:32 crc kubenswrapper[4811]: I0216 21:01:32.838386 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" event={"ID":"85ac6a76-264e-4810-9488-0424c2405c00","Type":"ContainerStarted","Data":"ea6efe0c805de80538122863053bbe7160f3a671fefde2970e42c81bca97b676"} Feb 16 21:01:32 crc kubenswrapper[4811]: I0216 21:01:32.838855 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" Feb 16 21:01:32 crc kubenswrapper[4811]: I0216 21:01:32.841806 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp" event={"ID":"19614070-a53f-4f4c-9d3f-63d424b05b8c","Type":"ContainerStarted","Data":"99baa2f0e17133f74ff1bc4f51e1b47494d5b47c4b4a66e903426c1dac79c5f4"} Feb 16 21:01:32 crc kubenswrapper[4811]: I0216 21:01:32.841992 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp" event={"ID":"19614070-a53f-4f4c-9d3f-63d424b05b8c","Type":"ContainerStarted","Data":"bc838cca36b57d59f0bb33cd58f430985e1239254be447bac2faef3dd221afb0"} Feb 16 21:01:32 crc kubenswrapper[4811]: I0216 21:01:32.842085 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp" Feb 16 21:01:32 crc kubenswrapper[4811]: I0216 21:01:32.846410 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" Feb 16 21:01:32 crc kubenswrapper[4811]: I0216 21:01:32.862829 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" podStartSLOduration=1.862809301 podStartE2EDuration="1.862809301s" podCreationTimestamp="2026-02-16 21:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:01:32.8615621 +0000 UTC m=+310.790858048" watchObservedRunningTime="2026-02-16 21:01:32.862809301 +0000 UTC m=+310.792105259" Feb 16 21:01:33 crc kubenswrapper[4811]: I0216 21:01:33.037782 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp" Feb 16 21:01:33 crc kubenswrapper[4811]: I0216 21:01:33.061490 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp" podStartSLOduration=2.061465095 podStartE2EDuration="2.061465095s" podCreationTimestamp="2026-02-16 21:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:01:32.915066666 +0000 UTC m=+310.844362614" watchObservedRunningTime="2026-02-16 21:01:33.061465095 +0000 UTC m=+310.990761033" Feb 16 21:01:48 crc kubenswrapper[4811]: I0216 21:01:48.364294 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:01:48 crc kubenswrapper[4811]: I0216 21:01:48.365243 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:01:50 crc kubenswrapper[4811]: I0216 21:01:50.400066 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp"] Feb 16 21:01:50 crc kubenswrapper[4811]: I0216 21:01:50.400795 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp" podUID="19614070-a53f-4f4c-9d3f-63d424b05b8c" containerName="route-controller-manager" containerID="cri-o://99baa2f0e17133f74ff1bc4f51e1b47494d5b47c4b4a66e903426c1dac79c5f4" gracePeriod=30 Feb 16 21:01:50 crc kubenswrapper[4811]: I0216 21:01:50.886676 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp" Feb 16 21:01:50 crc kubenswrapper[4811]: I0216 21:01:50.959468 4811 generic.go:334] "Generic (PLEG): container finished" podID="19614070-a53f-4f4c-9d3f-63d424b05b8c" containerID="99baa2f0e17133f74ff1bc4f51e1b47494d5b47c4b4a66e903426c1dac79c5f4" exitCode=0 Feb 16 21:01:50 crc kubenswrapper[4811]: I0216 21:01:50.959541 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp" event={"ID":"19614070-a53f-4f4c-9d3f-63d424b05b8c","Type":"ContainerDied","Data":"99baa2f0e17133f74ff1bc4f51e1b47494d5b47c4b4a66e903426c1dac79c5f4"} Feb 16 21:01:50 crc kubenswrapper[4811]: I0216 21:01:50.959577 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp" Feb 16 21:01:50 crc kubenswrapper[4811]: I0216 21:01:50.959628 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp" event={"ID":"19614070-a53f-4f4c-9d3f-63d424b05b8c","Type":"ContainerDied","Data":"bc838cca36b57d59f0bb33cd58f430985e1239254be447bac2faef3dd221afb0"} Feb 16 21:01:50 crc kubenswrapper[4811]: I0216 21:01:50.959655 4811 scope.go:117] "RemoveContainer" containerID="99baa2f0e17133f74ff1bc4f51e1b47494d5b47c4b4a66e903426c1dac79c5f4" Feb 16 21:01:50 crc kubenswrapper[4811]: I0216 21:01:50.978860 4811 scope.go:117] "RemoveContainer" containerID="99baa2f0e17133f74ff1bc4f51e1b47494d5b47c4b4a66e903426c1dac79c5f4" Feb 16 21:01:50 crc kubenswrapper[4811]: E0216 21:01:50.979674 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99baa2f0e17133f74ff1bc4f51e1b47494d5b47c4b4a66e903426c1dac79c5f4\": container with ID starting with 
99baa2f0e17133f74ff1bc4f51e1b47494d5b47c4b4a66e903426c1dac79c5f4 not found: ID does not exist" containerID="99baa2f0e17133f74ff1bc4f51e1b47494d5b47c4b4a66e903426c1dac79c5f4" Feb 16 21:01:50 crc kubenswrapper[4811]: I0216 21:01:50.979723 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99baa2f0e17133f74ff1bc4f51e1b47494d5b47c4b4a66e903426c1dac79c5f4"} err="failed to get container status \"99baa2f0e17133f74ff1bc4f51e1b47494d5b47c4b4a66e903426c1dac79c5f4\": rpc error: code = NotFound desc = could not find container \"99baa2f0e17133f74ff1bc4f51e1b47494d5b47c4b4a66e903426c1dac79c5f4\": container with ID starting with 99baa2f0e17133f74ff1bc4f51e1b47494d5b47c4b4a66e903426c1dac79c5f4 not found: ID does not exist" Feb 16 21:01:50 crc kubenswrapper[4811]: I0216 21:01:50.996547 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19614070-a53f-4f4c-9d3f-63d424b05b8c-client-ca\") pod \"19614070-a53f-4f4c-9d3f-63d424b05b8c\" (UID: \"19614070-a53f-4f4c-9d3f-63d424b05b8c\") " Feb 16 21:01:50 crc kubenswrapper[4811]: I0216 21:01:50.996633 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19614070-a53f-4f4c-9d3f-63d424b05b8c-config\") pod \"19614070-a53f-4f4c-9d3f-63d424b05b8c\" (UID: \"19614070-a53f-4f4c-9d3f-63d424b05b8c\") " Feb 16 21:01:50 crc kubenswrapper[4811]: I0216 21:01:50.996661 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pqfm\" (UniqueName: \"kubernetes.io/projected/19614070-a53f-4f4c-9d3f-63d424b05b8c-kube-api-access-6pqfm\") pod \"19614070-a53f-4f4c-9d3f-63d424b05b8c\" (UID: \"19614070-a53f-4f4c-9d3f-63d424b05b8c\") " Feb 16 21:01:50 crc kubenswrapper[4811]: I0216 21:01:50.996693 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/19614070-a53f-4f4c-9d3f-63d424b05b8c-serving-cert\") pod \"19614070-a53f-4f4c-9d3f-63d424b05b8c\" (UID: \"19614070-a53f-4f4c-9d3f-63d424b05b8c\") " Feb 16 21:01:50 crc kubenswrapper[4811]: I0216 21:01:50.997718 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19614070-a53f-4f4c-9d3f-63d424b05b8c-client-ca" (OuterVolumeSpecName: "client-ca") pod "19614070-a53f-4f4c-9d3f-63d424b05b8c" (UID: "19614070-a53f-4f4c-9d3f-63d424b05b8c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:01:50 crc kubenswrapper[4811]: I0216 21:01:50.997931 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19614070-a53f-4f4c-9d3f-63d424b05b8c-config" (OuterVolumeSpecName: "config") pod "19614070-a53f-4f4c-9d3f-63d424b05b8c" (UID: "19614070-a53f-4f4c-9d3f-63d424b05b8c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:01:51 crc kubenswrapper[4811]: I0216 21:01:51.003681 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19614070-a53f-4f4c-9d3f-63d424b05b8c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "19614070-a53f-4f4c-9d3f-63d424b05b8c" (UID: "19614070-a53f-4f4c-9d3f-63d424b05b8c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:01:51 crc kubenswrapper[4811]: I0216 21:01:51.003904 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19614070-a53f-4f4c-9d3f-63d424b05b8c-kube-api-access-6pqfm" (OuterVolumeSpecName: "kube-api-access-6pqfm") pod "19614070-a53f-4f4c-9d3f-63d424b05b8c" (UID: "19614070-a53f-4f4c-9d3f-63d424b05b8c"). InnerVolumeSpecName "kube-api-access-6pqfm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:01:51 crc kubenswrapper[4811]: I0216 21:01:51.098767 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19614070-a53f-4f4c-9d3f-63d424b05b8c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:01:51 crc kubenswrapper[4811]: I0216 21:01:51.098828 4811 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19614070-a53f-4f4c-9d3f-63d424b05b8c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:01:51 crc kubenswrapper[4811]: I0216 21:01:51.098846 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19614070-a53f-4f4c-9d3f-63d424b05b8c-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:01:51 crc kubenswrapper[4811]: I0216 21:01:51.098866 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pqfm\" (UniqueName: \"kubernetes.io/projected/19614070-a53f-4f4c-9d3f-63d424b05b8c-kube-api-access-6pqfm\") on node \"crc\" DevicePath \"\"" Feb 16 21:01:51 crc kubenswrapper[4811]: I0216 21:01:51.308722 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp"] Feb 16 21:01:51 crc kubenswrapper[4811]: I0216 21:01:51.316676 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-965d95948-nj4cp"] Feb 16 21:01:52 crc kubenswrapper[4811]: I0216 21:01:52.238486 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65864565b6-s26lv"] Feb 16 21:01:52 crc kubenswrapper[4811]: E0216 21:01:52.238769 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19614070-a53f-4f4c-9d3f-63d424b05b8c" containerName="route-controller-manager" Feb 16 21:01:52 crc kubenswrapper[4811]: I0216 21:01:52.238784 4811 
state_mem.go:107] "Deleted CPUSet assignment" podUID="19614070-a53f-4f4c-9d3f-63d424b05b8c" containerName="route-controller-manager" Feb 16 21:01:52 crc kubenswrapper[4811]: I0216 21:01:52.238904 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="19614070-a53f-4f4c-9d3f-63d424b05b8c" containerName="route-controller-manager" Feb 16 21:01:52 crc kubenswrapper[4811]: I0216 21:01:52.239364 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65864565b6-s26lv" Feb 16 21:01:52 crc kubenswrapper[4811]: I0216 21:01:52.241083 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 21:01:52 crc kubenswrapper[4811]: I0216 21:01:52.241540 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 21:01:52 crc kubenswrapper[4811]: I0216 21:01:52.241647 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 21:01:52 crc kubenswrapper[4811]: I0216 21:01:52.241765 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 21:01:52 crc kubenswrapper[4811]: I0216 21:01:52.242029 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 21:01:52 crc kubenswrapper[4811]: I0216 21:01:52.243664 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 21:01:52 crc kubenswrapper[4811]: I0216 21:01:52.256701 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65864565b6-s26lv"] Feb 16 21:01:52 crc kubenswrapper[4811]: I0216 21:01:52.419031 4811 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f48996c3-a6b0-4afa-88b5-050585cbdbdf-config\") pod \"route-controller-manager-65864565b6-s26lv\" (UID: \"f48996c3-a6b0-4afa-88b5-050585cbdbdf\") " pod="openshift-route-controller-manager/route-controller-manager-65864565b6-s26lv" Feb 16 21:01:52 crc kubenswrapper[4811]: I0216 21:01:52.419089 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f48996c3-a6b0-4afa-88b5-050585cbdbdf-client-ca\") pod \"route-controller-manager-65864565b6-s26lv\" (UID: \"f48996c3-a6b0-4afa-88b5-050585cbdbdf\") " pod="openshift-route-controller-manager/route-controller-manager-65864565b6-s26lv" Feb 16 21:01:52 crc kubenswrapper[4811]: I0216 21:01:52.419117 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjt4f\" (UniqueName: \"kubernetes.io/projected/f48996c3-a6b0-4afa-88b5-050585cbdbdf-kube-api-access-fjt4f\") pod \"route-controller-manager-65864565b6-s26lv\" (UID: \"f48996c3-a6b0-4afa-88b5-050585cbdbdf\") " pod="openshift-route-controller-manager/route-controller-manager-65864565b6-s26lv" Feb 16 21:01:52 crc kubenswrapper[4811]: I0216 21:01:52.419161 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f48996c3-a6b0-4afa-88b5-050585cbdbdf-serving-cert\") pod \"route-controller-manager-65864565b6-s26lv\" (UID: \"f48996c3-a6b0-4afa-88b5-050585cbdbdf\") " pod="openshift-route-controller-manager/route-controller-manager-65864565b6-s26lv" Feb 16 21:01:52 crc kubenswrapper[4811]: I0216 21:01:52.520313 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f48996c3-a6b0-4afa-88b5-050585cbdbdf-config\") pod 
\"route-controller-manager-65864565b6-s26lv\" (UID: \"f48996c3-a6b0-4afa-88b5-050585cbdbdf\") " pod="openshift-route-controller-manager/route-controller-manager-65864565b6-s26lv" Feb 16 21:01:52 crc kubenswrapper[4811]: I0216 21:01:52.520397 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f48996c3-a6b0-4afa-88b5-050585cbdbdf-client-ca\") pod \"route-controller-manager-65864565b6-s26lv\" (UID: \"f48996c3-a6b0-4afa-88b5-050585cbdbdf\") " pod="openshift-route-controller-manager/route-controller-manager-65864565b6-s26lv" Feb 16 21:01:52 crc kubenswrapper[4811]: I0216 21:01:52.520434 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjt4f\" (UniqueName: \"kubernetes.io/projected/f48996c3-a6b0-4afa-88b5-050585cbdbdf-kube-api-access-fjt4f\") pod \"route-controller-manager-65864565b6-s26lv\" (UID: \"f48996c3-a6b0-4afa-88b5-050585cbdbdf\") " pod="openshift-route-controller-manager/route-controller-manager-65864565b6-s26lv" Feb 16 21:01:52 crc kubenswrapper[4811]: I0216 21:01:52.520498 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f48996c3-a6b0-4afa-88b5-050585cbdbdf-serving-cert\") pod \"route-controller-manager-65864565b6-s26lv\" (UID: \"f48996c3-a6b0-4afa-88b5-050585cbdbdf\") " pod="openshift-route-controller-manager/route-controller-manager-65864565b6-s26lv" Feb 16 21:01:52 crc kubenswrapper[4811]: I0216 21:01:52.522500 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f48996c3-a6b0-4afa-88b5-050585cbdbdf-client-ca\") pod \"route-controller-manager-65864565b6-s26lv\" (UID: \"f48996c3-a6b0-4afa-88b5-050585cbdbdf\") " pod="openshift-route-controller-manager/route-controller-manager-65864565b6-s26lv" Feb 16 21:01:52 crc kubenswrapper[4811]: I0216 21:01:52.523153 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f48996c3-a6b0-4afa-88b5-050585cbdbdf-config\") pod \"route-controller-manager-65864565b6-s26lv\" (UID: \"f48996c3-a6b0-4afa-88b5-050585cbdbdf\") " pod="openshift-route-controller-manager/route-controller-manager-65864565b6-s26lv" Feb 16 21:01:52 crc kubenswrapper[4811]: I0216 21:01:52.526000 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f48996c3-a6b0-4afa-88b5-050585cbdbdf-serving-cert\") pod \"route-controller-manager-65864565b6-s26lv\" (UID: \"f48996c3-a6b0-4afa-88b5-050585cbdbdf\") " pod="openshift-route-controller-manager/route-controller-manager-65864565b6-s26lv" Feb 16 21:01:52 crc kubenswrapper[4811]: I0216 21:01:52.561142 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjt4f\" (UniqueName: \"kubernetes.io/projected/f48996c3-a6b0-4afa-88b5-050585cbdbdf-kube-api-access-fjt4f\") pod \"route-controller-manager-65864565b6-s26lv\" (UID: \"f48996c3-a6b0-4afa-88b5-050585cbdbdf\") " pod="openshift-route-controller-manager/route-controller-manager-65864565b6-s26lv" Feb 16 21:01:52 crc kubenswrapper[4811]: I0216 21:01:52.714877 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19614070-a53f-4f4c-9d3f-63d424b05b8c" path="/var/lib/kubelet/pods/19614070-a53f-4f4c-9d3f-63d424b05b8c/volumes" Feb 16 21:01:52 crc kubenswrapper[4811]: I0216 21:01:52.854967 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65864565b6-s26lv" Feb 16 21:01:53 crc kubenswrapper[4811]: I0216 21:01:53.312671 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65864565b6-s26lv"] Feb 16 21:01:53 crc kubenswrapper[4811]: W0216 21:01:53.320446 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf48996c3_a6b0_4afa_88b5_050585cbdbdf.slice/crio-4067f4fdd09b75bc90f802ab0afebb69abdfea8ae24fef53b09c44d069f49a56 WatchSource:0}: Error finding container 4067f4fdd09b75bc90f802ab0afebb69abdfea8ae24fef53b09c44d069f49a56: Status 404 returned error can't find the container with id 4067f4fdd09b75bc90f802ab0afebb69abdfea8ae24fef53b09c44d069f49a56 Feb 16 21:01:53 crc kubenswrapper[4811]: I0216 21:01:53.991014 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65864565b6-s26lv" event={"ID":"f48996c3-a6b0-4afa-88b5-050585cbdbdf","Type":"ContainerStarted","Data":"2e10a1a79b77ef5764f89be5e87505fdee9850e6ff88c30590817e3b404a1186"} Feb 16 21:01:53 crc kubenswrapper[4811]: I0216 21:01:53.991569 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-65864565b6-s26lv" Feb 16 21:01:53 crc kubenswrapper[4811]: I0216 21:01:53.991597 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65864565b6-s26lv" event={"ID":"f48996c3-a6b0-4afa-88b5-050585cbdbdf","Type":"ContainerStarted","Data":"4067f4fdd09b75bc90f802ab0afebb69abdfea8ae24fef53b09c44d069f49a56"} Feb 16 21:01:53 crc kubenswrapper[4811]: I0216 21:01:53.997834 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-65864565b6-s26lv" Feb 16 
21:01:54 crc kubenswrapper[4811]: I0216 21:01:54.023251 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-65864565b6-s26lv" podStartSLOduration=4.023181686 podStartE2EDuration="4.023181686s" podCreationTimestamp="2026-02-16 21:01:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:01:54.019545259 +0000 UTC m=+331.948841207" watchObservedRunningTime="2026-02-16 21:01:54.023181686 +0000 UTC m=+331.952477684" Feb 16 21:02:18 crc kubenswrapper[4811]: I0216 21:02:18.364809 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:02:18 crc kubenswrapper[4811]: I0216 21:02:18.365696 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.197437 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gc8f9"] Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.199168 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.217033 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gc8f9"] Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.367869 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f180527c-499e-4604-a926-48be5acd406b-bound-sa-token\") pod \"image-registry-66df7c8f76-gc8f9\" (UID: \"f180527c-499e-4604-a926-48be5acd406b\") " pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.368151 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f180527c-499e-4604-a926-48be5acd406b-registry-tls\") pod \"image-registry-66df7c8f76-gc8f9\" (UID: \"f180527c-499e-4604-a926-48be5acd406b\") " pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.368332 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f180527c-499e-4604-a926-48be5acd406b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gc8f9\" (UID: \"f180527c-499e-4604-a926-48be5acd406b\") " pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.368648 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s26np\" (UniqueName: \"kubernetes.io/projected/f180527c-499e-4604-a926-48be5acd406b-kube-api-access-s26np\") pod \"image-registry-66df7c8f76-gc8f9\" (UID: \"f180527c-499e-4604-a926-48be5acd406b\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.368759 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f180527c-499e-4604-a926-48be5acd406b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gc8f9\" (UID: \"f180527c-499e-4604-a926-48be5acd406b\") " pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.368898 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gc8f9\" (UID: \"f180527c-499e-4604-a926-48be5acd406b\") " pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.369000 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f180527c-499e-4604-a926-48be5acd406b-trusted-ca\") pod \"image-registry-66df7c8f76-gc8f9\" (UID: \"f180527c-499e-4604-a926-48be5acd406b\") " pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.369110 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f180527c-499e-4604-a926-48be5acd406b-registry-certificates\") pod \"image-registry-66df7c8f76-gc8f9\" (UID: \"f180527c-499e-4604-a926-48be5acd406b\") " pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.396134 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gc8f9\" (UID: \"f180527c-499e-4604-a926-48be5acd406b\") " pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.470375 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f180527c-499e-4604-a926-48be5acd406b-bound-sa-token\") pod \"image-registry-66df7c8f76-gc8f9\" (UID: \"f180527c-499e-4604-a926-48be5acd406b\") " pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.470434 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f180527c-499e-4604-a926-48be5acd406b-registry-tls\") pod \"image-registry-66df7c8f76-gc8f9\" (UID: \"f180527c-499e-4604-a926-48be5acd406b\") " pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.470454 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f180527c-499e-4604-a926-48be5acd406b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gc8f9\" (UID: \"f180527c-499e-4604-a926-48be5acd406b\") " pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.470481 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s26np\" (UniqueName: \"kubernetes.io/projected/f180527c-499e-4604-a926-48be5acd406b-kube-api-access-s26np\") pod \"image-registry-66df7c8f76-gc8f9\" (UID: \"f180527c-499e-4604-a926-48be5acd406b\") " pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:25 crc 
kubenswrapper[4811]: I0216 21:02:25.470506 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f180527c-499e-4604-a926-48be5acd406b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gc8f9\" (UID: \"f180527c-499e-4604-a926-48be5acd406b\") " pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.470542 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f180527c-499e-4604-a926-48be5acd406b-trusted-ca\") pod \"image-registry-66df7c8f76-gc8f9\" (UID: \"f180527c-499e-4604-a926-48be5acd406b\") " pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.470566 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f180527c-499e-4604-a926-48be5acd406b-registry-certificates\") pod \"image-registry-66df7c8f76-gc8f9\" (UID: \"f180527c-499e-4604-a926-48be5acd406b\") " pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.471702 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f180527c-499e-4604-a926-48be5acd406b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gc8f9\" (UID: \"f180527c-499e-4604-a926-48be5acd406b\") " pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.471944 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f180527c-499e-4604-a926-48be5acd406b-registry-certificates\") pod \"image-registry-66df7c8f76-gc8f9\" (UID: \"f180527c-499e-4604-a926-48be5acd406b\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.472758 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f180527c-499e-4604-a926-48be5acd406b-trusted-ca\") pod \"image-registry-66df7c8f76-gc8f9\" (UID: \"f180527c-499e-4604-a926-48be5acd406b\") " pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.481697 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f180527c-499e-4604-a926-48be5acd406b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gc8f9\" (UID: \"f180527c-499e-4604-a926-48be5acd406b\") " pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.488301 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f180527c-499e-4604-a926-48be5acd406b-registry-tls\") pod \"image-registry-66df7c8f76-gc8f9\" (UID: \"f180527c-499e-4604-a926-48be5acd406b\") " pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.493719 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s26np\" (UniqueName: \"kubernetes.io/projected/f180527c-499e-4604-a926-48be5acd406b-kube-api-access-s26np\") pod \"image-registry-66df7c8f76-gc8f9\" (UID: \"f180527c-499e-4604-a926-48be5acd406b\") " pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.500149 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f180527c-499e-4604-a926-48be5acd406b-bound-sa-token\") pod \"image-registry-66df7c8f76-gc8f9\" (UID: 
\"f180527c-499e-4604-a926-48be5acd406b\") " pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.538939 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:25 crc kubenswrapper[4811]: I0216 21:02:25.953982 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gc8f9"] Feb 16 21:02:25 crc kubenswrapper[4811]: W0216 21:02:25.965376 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf180527c_499e_4604_a926_48be5acd406b.slice/crio-99c21f61b9b8d49b23c291197d5dc482e0b8facfa5d68d5b3f1d76c113f21730 WatchSource:0}: Error finding container 99c21f61b9b8d49b23c291197d5dc482e0b8facfa5d68d5b3f1d76c113f21730: Status 404 returned error can't find the container with id 99c21f61b9b8d49b23c291197d5dc482e0b8facfa5d68d5b3f1d76c113f21730 Feb 16 21:02:26 crc kubenswrapper[4811]: I0216 21:02:26.259583 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" event={"ID":"f180527c-499e-4604-a926-48be5acd406b","Type":"ContainerStarted","Data":"3710186681d4f0d4ba99d2a52505c4ab6425f98998e6bcb7a382bbf2c2a8ac57"} Feb 16 21:02:26 crc kubenswrapper[4811]: I0216 21:02:26.259628 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" event={"ID":"f180527c-499e-4604-a926-48be5acd406b","Type":"ContainerStarted","Data":"99c21f61b9b8d49b23c291197d5dc482e0b8facfa5d68d5b3f1d76c113f21730"} Feb 16 21:02:26 crc kubenswrapper[4811]: I0216 21:02:26.259742 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:26 crc kubenswrapper[4811]: I0216 21:02:26.284246 4811 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" podStartSLOduration=1.284225726 podStartE2EDuration="1.284225726s" podCreationTimestamp="2026-02-16 21:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:02:26.278580511 +0000 UTC m=+364.207876459" watchObservedRunningTime="2026-02-16 21:02:26.284225726 +0000 UTC m=+364.213521684" Feb 16 21:02:30 crc kubenswrapper[4811]: I0216 21:02:30.372068 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dd8f4957d-klzfl"] Feb 16 21:02:30 crc kubenswrapper[4811]: I0216 21:02:30.372828 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" podUID="85ac6a76-264e-4810-9488-0424c2405c00" containerName="controller-manager" containerID="cri-o://f1b1ed22406232749aa28799542e6d515eb0ff1f36a15b0c1cfe402f25603c36" gracePeriod=30 Feb 16 21:02:30 crc kubenswrapper[4811]: I0216 21:02:30.782393 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" Feb 16 21:02:30 crc kubenswrapper[4811]: I0216 21:02:30.793408 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ac6a76-264e-4810-9488-0424c2405c00-serving-cert\") pod \"85ac6a76-264e-4810-9488-0424c2405c00\" (UID: \"85ac6a76-264e-4810-9488-0424c2405c00\") " Feb 16 21:02:30 crc kubenswrapper[4811]: I0216 21:02:30.793487 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57wq9\" (UniqueName: \"kubernetes.io/projected/85ac6a76-264e-4810-9488-0424c2405c00-kube-api-access-57wq9\") pod \"85ac6a76-264e-4810-9488-0424c2405c00\" (UID: \"85ac6a76-264e-4810-9488-0424c2405c00\") " Feb 16 21:02:30 crc kubenswrapper[4811]: I0216 21:02:30.793525 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ac6a76-264e-4810-9488-0424c2405c00-client-ca\") pod \"85ac6a76-264e-4810-9488-0424c2405c00\" (UID: \"85ac6a76-264e-4810-9488-0424c2405c00\") " Feb 16 21:02:30 crc kubenswrapper[4811]: I0216 21:02:30.793633 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85ac6a76-264e-4810-9488-0424c2405c00-proxy-ca-bundles\") pod \"85ac6a76-264e-4810-9488-0424c2405c00\" (UID: \"85ac6a76-264e-4810-9488-0424c2405c00\") " Feb 16 21:02:30 crc kubenswrapper[4811]: I0216 21:02:30.793664 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ac6a76-264e-4810-9488-0424c2405c00-config\") pod \"85ac6a76-264e-4810-9488-0424c2405c00\" (UID: \"85ac6a76-264e-4810-9488-0424c2405c00\") " Feb 16 21:02:30 crc kubenswrapper[4811]: I0216 21:02:30.796154 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/85ac6a76-264e-4810-9488-0424c2405c00-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "85ac6a76-264e-4810-9488-0424c2405c00" (UID: "85ac6a76-264e-4810-9488-0424c2405c00"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:02:30 crc kubenswrapper[4811]: I0216 21:02:30.797440 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ac6a76-264e-4810-9488-0424c2405c00-config" (OuterVolumeSpecName: "config") pod "85ac6a76-264e-4810-9488-0424c2405c00" (UID: "85ac6a76-264e-4810-9488-0424c2405c00"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:02:30 crc kubenswrapper[4811]: I0216 21:02:30.797816 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ac6a76-264e-4810-9488-0424c2405c00-client-ca" (OuterVolumeSpecName: "client-ca") pod "85ac6a76-264e-4810-9488-0424c2405c00" (UID: "85ac6a76-264e-4810-9488-0424c2405c00"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:02:30 crc kubenswrapper[4811]: I0216 21:02:30.809408 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85ac6a76-264e-4810-9488-0424c2405c00-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "85ac6a76-264e-4810-9488-0424c2405c00" (UID: "85ac6a76-264e-4810-9488-0424c2405c00"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:02:30 crc kubenswrapper[4811]: I0216 21:02:30.809571 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85ac6a76-264e-4810-9488-0424c2405c00-kube-api-access-57wq9" (OuterVolumeSpecName: "kube-api-access-57wq9") pod "85ac6a76-264e-4810-9488-0424c2405c00" (UID: "85ac6a76-264e-4810-9488-0424c2405c00"). InnerVolumeSpecName "kube-api-access-57wq9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:02:30 crc kubenswrapper[4811]: I0216 21:02:30.896038 4811 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85ac6a76-264e-4810-9488-0424c2405c00-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 21:02:30 crc kubenswrapper[4811]: I0216 21:02:30.896079 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ac6a76-264e-4810-9488-0424c2405c00-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:02:30 crc kubenswrapper[4811]: I0216 21:02:30.896090 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ac6a76-264e-4810-9488-0424c2405c00-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:02:30 crc kubenswrapper[4811]: I0216 21:02:30.896098 4811 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ac6a76-264e-4810-9488-0424c2405c00-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:02:30 crc kubenswrapper[4811]: I0216 21:02:30.896107 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57wq9\" (UniqueName: \"kubernetes.io/projected/85ac6a76-264e-4810-9488-0424c2405c00-kube-api-access-57wq9\") on node \"crc\" DevicePath \"\"" Feb 16 21:02:31 crc kubenswrapper[4811]: I0216 21:02:31.293906 4811 generic.go:334] "Generic (PLEG): container finished" podID="85ac6a76-264e-4810-9488-0424c2405c00" containerID="f1b1ed22406232749aa28799542e6d515eb0ff1f36a15b0c1cfe402f25603c36" exitCode=0 Feb 16 21:02:31 crc kubenswrapper[4811]: I0216 21:02:31.293980 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" event={"ID":"85ac6a76-264e-4810-9488-0424c2405c00","Type":"ContainerDied","Data":"f1b1ed22406232749aa28799542e6d515eb0ff1f36a15b0c1cfe402f25603c36"} Feb 16 21:02:31 crc 
kubenswrapper[4811]: I0216 21:02:31.294021 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" event={"ID":"85ac6a76-264e-4810-9488-0424c2405c00","Type":"ContainerDied","Data":"ea6efe0c805de80538122863053bbe7160f3a671fefde2970e42c81bca97b676"} Feb 16 21:02:31 crc kubenswrapper[4811]: I0216 21:02:31.294050 4811 scope.go:117] "RemoveContainer" containerID="f1b1ed22406232749aa28799542e6d515eb0ff1f36a15b0c1cfe402f25603c36" Feb 16 21:02:31 crc kubenswrapper[4811]: I0216 21:02:31.293982 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dd8f4957d-klzfl" Feb 16 21:02:31 crc kubenswrapper[4811]: I0216 21:02:31.329493 4811 scope.go:117] "RemoveContainer" containerID="f1b1ed22406232749aa28799542e6d515eb0ff1f36a15b0c1cfe402f25603c36" Feb 16 21:02:31 crc kubenswrapper[4811]: E0216 21:02:31.330173 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1b1ed22406232749aa28799542e6d515eb0ff1f36a15b0c1cfe402f25603c36\": container with ID starting with f1b1ed22406232749aa28799542e6d515eb0ff1f36a15b0c1cfe402f25603c36 not found: ID does not exist" containerID="f1b1ed22406232749aa28799542e6d515eb0ff1f36a15b0c1cfe402f25603c36" Feb 16 21:02:31 crc kubenswrapper[4811]: I0216 21:02:31.330344 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1b1ed22406232749aa28799542e6d515eb0ff1f36a15b0c1cfe402f25603c36"} err="failed to get container status \"f1b1ed22406232749aa28799542e6d515eb0ff1f36a15b0c1cfe402f25603c36\": rpc error: code = NotFound desc = could not find container \"f1b1ed22406232749aa28799542e6d515eb0ff1f36a15b0c1cfe402f25603c36\": container with ID starting with f1b1ed22406232749aa28799542e6d515eb0ff1f36a15b0c1cfe402f25603c36 not found: ID does not exist" Feb 16 21:02:31 crc kubenswrapper[4811]: I0216 
21:02:31.348964 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dd8f4957d-klzfl"] Feb 16 21:02:31 crc kubenswrapper[4811]: I0216 21:02:31.353548 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-dd8f4957d-klzfl"] Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.272250 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-787fc78659-k7x8d"] Feb 16 21:02:32 crc kubenswrapper[4811]: E0216 21:02:32.272485 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85ac6a76-264e-4810-9488-0424c2405c00" containerName="controller-manager" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.272496 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="85ac6a76-264e-4810-9488-0424c2405c00" containerName="controller-manager" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.272619 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="85ac6a76-264e-4810-9488-0424c2405c00" containerName="controller-manager" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.273076 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-787fc78659-k7x8d" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.277595 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.277606 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.277824 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.277867 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.278841 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.281902 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.283531 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.289122 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-787fc78659-k7x8d"] Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.313934 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a5aedd9-22fc-44e8-9b79-919de78b92c1-serving-cert\") pod \"controller-manager-787fc78659-k7x8d\" (UID: \"2a5aedd9-22fc-44e8-9b79-919de78b92c1\") " 
pod="openshift-controller-manager/controller-manager-787fc78659-k7x8d" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.314023 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a5aedd9-22fc-44e8-9b79-919de78b92c1-proxy-ca-bundles\") pod \"controller-manager-787fc78659-k7x8d\" (UID: \"2a5aedd9-22fc-44e8-9b79-919de78b92c1\") " pod="openshift-controller-manager/controller-manager-787fc78659-k7x8d" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.314080 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a5aedd9-22fc-44e8-9b79-919de78b92c1-client-ca\") pod \"controller-manager-787fc78659-k7x8d\" (UID: \"2a5aedd9-22fc-44e8-9b79-919de78b92c1\") " pod="openshift-controller-manager/controller-manager-787fc78659-k7x8d" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.314107 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a5aedd9-22fc-44e8-9b79-919de78b92c1-config\") pod \"controller-manager-787fc78659-k7x8d\" (UID: \"2a5aedd9-22fc-44e8-9b79-919de78b92c1\") " pod="openshift-controller-manager/controller-manager-787fc78659-k7x8d" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.314327 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j56c\" (UniqueName: \"kubernetes.io/projected/2a5aedd9-22fc-44e8-9b79-919de78b92c1-kube-api-access-2j56c\") pod \"controller-manager-787fc78659-k7x8d\" (UID: \"2a5aedd9-22fc-44e8-9b79-919de78b92c1\") " pod="openshift-controller-manager/controller-manager-787fc78659-k7x8d" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.414845 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/2a5aedd9-22fc-44e8-9b79-919de78b92c1-client-ca\") pod \"controller-manager-787fc78659-k7x8d\" (UID: \"2a5aedd9-22fc-44e8-9b79-919de78b92c1\") " pod="openshift-controller-manager/controller-manager-787fc78659-k7x8d" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.414906 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a5aedd9-22fc-44e8-9b79-919de78b92c1-config\") pod \"controller-manager-787fc78659-k7x8d\" (UID: \"2a5aedd9-22fc-44e8-9b79-919de78b92c1\") " pod="openshift-controller-manager/controller-manager-787fc78659-k7x8d" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.414946 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j56c\" (UniqueName: \"kubernetes.io/projected/2a5aedd9-22fc-44e8-9b79-919de78b92c1-kube-api-access-2j56c\") pod \"controller-manager-787fc78659-k7x8d\" (UID: \"2a5aedd9-22fc-44e8-9b79-919de78b92c1\") " pod="openshift-controller-manager/controller-manager-787fc78659-k7x8d" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.414985 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a5aedd9-22fc-44e8-9b79-919de78b92c1-serving-cert\") pod \"controller-manager-787fc78659-k7x8d\" (UID: \"2a5aedd9-22fc-44e8-9b79-919de78b92c1\") " pod="openshift-controller-manager/controller-manager-787fc78659-k7x8d" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.415046 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a5aedd9-22fc-44e8-9b79-919de78b92c1-proxy-ca-bundles\") pod \"controller-manager-787fc78659-k7x8d\" (UID: \"2a5aedd9-22fc-44e8-9b79-919de78b92c1\") " pod="openshift-controller-manager/controller-manager-787fc78659-k7x8d" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.416503 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a5aedd9-22fc-44e8-9b79-919de78b92c1-proxy-ca-bundles\") pod \"controller-manager-787fc78659-k7x8d\" (UID: \"2a5aedd9-22fc-44e8-9b79-919de78b92c1\") " pod="openshift-controller-manager/controller-manager-787fc78659-k7x8d" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.417067 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a5aedd9-22fc-44e8-9b79-919de78b92c1-client-ca\") pod \"controller-manager-787fc78659-k7x8d\" (UID: \"2a5aedd9-22fc-44e8-9b79-919de78b92c1\") " pod="openshift-controller-manager/controller-manager-787fc78659-k7x8d" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.418108 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a5aedd9-22fc-44e8-9b79-919de78b92c1-config\") pod \"controller-manager-787fc78659-k7x8d\" (UID: \"2a5aedd9-22fc-44e8-9b79-919de78b92c1\") " pod="openshift-controller-manager/controller-manager-787fc78659-k7x8d" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.426394 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a5aedd9-22fc-44e8-9b79-919de78b92c1-serving-cert\") pod \"controller-manager-787fc78659-k7x8d\" (UID: \"2a5aedd9-22fc-44e8-9b79-919de78b92c1\") " pod="openshift-controller-manager/controller-manager-787fc78659-k7x8d" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.440775 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j56c\" (UniqueName: \"kubernetes.io/projected/2a5aedd9-22fc-44e8-9b79-919de78b92c1-kube-api-access-2j56c\") pod \"controller-manager-787fc78659-k7x8d\" (UID: \"2a5aedd9-22fc-44e8-9b79-919de78b92c1\") " pod="openshift-controller-manager/controller-manager-787fc78659-k7x8d" Feb 16 
21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.593762 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-787fc78659-k7x8d" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.715953 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85ac6a76-264e-4810-9488-0424c2405c00" path="/var/lib/kubelet/pods/85ac6a76-264e-4810-9488-0424c2405c00/volumes" Feb 16 21:02:32 crc kubenswrapper[4811]: I0216 21:02:32.864527 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-787fc78659-k7x8d"] Feb 16 21:02:32 crc kubenswrapper[4811]: W0216 21:02:32.883832 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a5aedd9_22fc_44e8_9b79_919de78b92c1.slice/crio-e53aafa9e5c65266d1b4326805c374727a1d86435047f607ad638716e8a0d0aa WatchSource:0}: Error finding container e53aafa9e5c65266d1b4326805c374727a1d86435047f607ad638716e8a0d0aa: Status 404 returned error can't find the container with id e53aafa9e5c65266d1b4326805c374727a1d86435047f607ad638716e8a0d0aa Feb 16 21:02:33 crc kubenswrapper[4811]: I0216 21:02:33.313133 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-787fc78659-k7x8d" event={"ID":"2a5aedd9-22fc-44e8-9b79-919de78b92c1","Type":"ContainerStarted","Data":"65d809c8844c405b72b9b1d163d8439fd9efc35db97e5db7ae0abe3456314b91"} Feb 16 21:02:33 crc kubenswrapper[4811]: I0216 21:02:33.313184 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-787fc78659-k7x8d" event={"ID":"2a5aedd9-22fc-44e8-9b79-919de78b92c1","Type":"ContainerStarted","Data":"e53aafa9e5c65266d1b4326805c374727a1d86435047f607ad638716e8a0d0aa"} Feb 16 21:02:33 crc kubenswrapper[4811]: I0216 21:02:33.315420 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-787fc78659-k7x8d" Feb 16 21:02:33 crc kubenswrapper[4811]: I0216 21:02:33.331276 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-787fc78659-k7x8d" Feb 16 21:02:33 crc kubenswrapper[4811]: I0216 21:02:33.340651 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-787fc78659-k7x8d" podStartSLOduration=3.340633673 podStartE2EDuration="3.340633673s" podCreationTimestamp="2026-02-16 21:02:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:02:33.338497762 +0000 UTC m=+371.267793720" watchObservedRunningTime="2026-02-16 21:02:33.340633673 +0000 UTC m=+371.269929611" Feb 16 21:02:45 crc kubenswrapper[4811]: I0216 21:02:45.545750 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-gc8f9" Feb 16 21:02:45 crc kubenswrapper[4811]: I0216 21:02:45.616383 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4z425"] Feb 16 21:02:48 crc kubenswrapper[4811]: I0216 21:02:48.363670 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:02:48 crc kubenswrapper[4811]: I0216 21:02:48.364031 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 16 21:02:48 crc kubenswrapper[4811]: I0216 21:02:48.364097 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" Feb 16 21:02:48 crc kubenswrapper[4811]: I0216 21:02:48.365133 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"511f95f6a6799c704fdd7e32c1371b422a6e981f14147fd4c29d440cdf6c2331"} pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:02:48 crc kubenswrapper[4811]: I0216 21:02:48.365289 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" containerID="cri-o://511f95f6a6799c704fdd7e32c1371b422a6e981f14147fd4c29d440cdf6c2331" gracePeriod=600 Feb 16 21:02:49 crc kubenswrapper[4811]: I0216 21:02:49.418135 4811 generic.go:334] "Generic (PLEG): container finished" podID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerID="511f95f6a6799c704fdd7e32c1371b422a6e981f14147fd4c29d440cdf6c2331" exitCode=0 Feb 16 21:02:49 crc kubenswrapper[4811]: I0216 21:02:49.418273 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerDied","Data":"511f95f6a6799c704fdd7e32c1371b422a6e981f14147fd4c29d440cdf6c2331"} Feb 16 21:02:49 crc kubenswrapper[4811]: I0216 21:02:49.418779 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerStarted","Data":"1f0a256388bab5ae3a75d81440eaebf36f0fd6fc190dadf86a4b8d117b1e9e11"} Feb 16 21:02:49 crc 
kubenswrapper[4811]: I0216 21:02:49.418832 4811 scope.go:117] "RemoveContainer" containerID="13fe079e9918b8d2b8813d35436a60079a470d30d7fbcbcc018b5e12688525ba" Feb 16 21:03:10 crc kubenswrapper[4811]: I0216 21:03:10.669400 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-4z425" podUID="20c76084-401b-41ca-ad08-2752d2d7132b" containerName="registry" containerID="cri-o://0e8cd14c8118cc7a30a42efbefa131e497495d5afcac5a4801f4dbbb4fe3dd1f" gracePeriod=30 Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.113299 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.248612 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/20c76084-401b-41ca-ad08-2752d2d7132b-ca-trust-extracted\") pod \"20c76084-401b-41ca-ad08-2752d2d7132b\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.248678 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/20c76084-401b-41ca-ad08-2752d2d7132b-registry-certificates\") pod \"20c76084-401b-41ca-ad08-2752d2d7132b\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.248880 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"20c76084-401b-41ca-ad08-2752d2d7132b\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.248921 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"registry-tls\" (UniqueName: \"kubernetes.io/projected/20c76084-401b-41ca-ad08-2752d2d7132b-registry-tls\") pod \"20c76084-401b-41ca-ad08-2752d2d7132b\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.248948 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/20c76084-401b-41ca-ad08-2752d2d7132b-installation-pull-secrets\") pod \"20c76084-401b-41ca-ad08-2752d2d7132b\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.248980 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20c76084-401b-41ca-ad08-2752d2d7132b-trusted-ca\") pod \"20c76084-401b-41ca-ad08-2752d2d7132b\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.249004 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20c76084-401b-41ca-ad08-2752d2d7132b-bound-sa-token\") pod \"20c76084-401b-41ca-ad08-2752d2d7132b\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.249033 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt45r\" (UniqueName: \"kubernetes.io/projected/20c76084-401b-41ca-ad08-2752d2d7132b-kube-api-access-pt45r\") pod \"20c76084-401b-41ca-ad08-2752d2d7132b\" (UID: \"20c76084-401b-41ca-ad08-2752d2d7132b\") " Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.250299 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20c76084-401b-41ca-ad08-2752d2d7132b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "20c76084-401b-41ca-ad08-2752d2d7132b" (UID: 
"20c76084-401b-41ca-ad08-2752d2d7132b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.250509 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20c76084-401b-41ca-ad08-2752d2d7132b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20c76084-401b-41ca-ad08-2752d2d7132b" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.256149 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20c76084-401b-41ca-ad08-2752d2d7132b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "20c76084-401b-41ca-ad08-2752d2d7132b" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.256364 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20c76084-401b-41ca-ad08-2752d2d7132b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "20c76084-401b-41ca-ad08-2752d2d7132b" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.257486 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20c76084-401b-41ca-ad08-2752d2d7132b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20c76084-401b-41ca-ad08-2752d2d7132b" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.259289 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20c76084-401b-41ca-ad08-2752d2d7132b-kube-api-access-pt45r" (OuterVolumeSpecName: "kube-api-access-pt45r") pod "20c76084-401b-41ca-ad08-2752d2d7132b" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b"). InnerVolumeSpecName "kube-api-access-pt45r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.265877 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "20c76084-401b-41ca-ad08-2752d2d7132b" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.288311 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20c76084-401b-41ca-ad08-2752d2d7132b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "20c76084-401b-41ca-ad08-2752d2d7132b" (UID: "20c76084-401b-41ca-ad08-2752d2d7132b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.350903 4811 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/20c76084-401b-41ca-ad08-2752d2d7132b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.350955 4811 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/20c76084-401b-41ca-ad08-2752d2d7132b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.350979 4811 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/20c76084-401b-41ca-ad08-2752d2d7132b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.350996 4811 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/20c76084-401b-41ca-ad08-2752d2d7132b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.351013 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20c76084-401b-41ca-ad08-2752d2d7132b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.351030 4811 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20c76084-401b-41ca-ad08-2752d2d7132b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.351047 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt45r\" (UniqueName: \"kubernetes.io/projected/20c76084-401b-41ca-ad08-2752d2d7132b-kube-api-access-pt45r\") on node \"crc\" DevicePath \"\"" Feb 16 21:03:11 crc 
kubenswrapper[4811]: I0216 21:03:11.581423 4811 generic.go:334] "Generic (PLEG): container finished" podID="20c76084-401b-41ca-ad08-2752d2d7132b" containerID="0e8cd14c8118cc7a30a42efbefa131e497495d5afcac5a4801f4dbbb4fe3dd1f" exitCode=0 Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.581486 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4z425" event={"ID":"20c76084-401b-41ca-ad08-2752d2d7132b","Type":"ContainerDied","Data":"0e8cd14c8118cc7a30a42efbefa131e497495d5afcac5a4801f4dbbb4fe3dd1f"} Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.581534 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4z425" event={"ID":"20c76084-401b-41ca-ad08-2752d2d7132b","Type":"ContainerDied","Data":"b368117ba3aebfba02513d6a32c5ca5f0ebfa0c43e99fc8257a241b99c5220d1"} Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.581581 4811 scope.go:117] "RemoveContainer" containerID="0e8cd14c8118cc7a30a42efbefa131e497495d5afcac5a4801f4dbbb4fe3dd1f" Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.582179 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4z425" Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.609149 4811 scope.go:117] "RemoveContainer" containerID="0e8cd14c8118cc7a30a42efbefa131e497495d5afcac5a4801f4dbbb4fe3dd1f" Feb 16 21:03:11 crc kubenswrapper[4811]: E0216 21:03:11.610159 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e8cd14c8118cc7a30a42efbefa131e497495d5afcac5a4801f4dbbb4fe3dd1f\": container with ID starting with 0e8cd14c8118cc7a30a42efbefa131e497495d5afcac5a4801f4dbbb4fe3dd1f not found: ID does not exist" containerID="0e8cd14c8118cc7a30a42efbefa131e497495d5afcac5a4801f4dbbb4fe3dd1f" Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.610283 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e8cd14c8118cc7a30a42efbefa131e497495d5afcac5a4801f4dbbb4fe3dd1f"} err="failed to get container status \"0e8cd14c8118cc7a30a42efbefa131e497495d5afcac5a4801f4dbbb4fe3dd1f\": rpc error: code = NotFound desc = could not find container \"0e8cd14c8118cc7a30a42efbefa131e497495d5afcac5a4801f4dbbb4fe3dd1f\": container with ID starting with 0e8cd14c8118cc7a30a42efbefa131e497495d5afcac5a4801f4dbbb4fe3dd1f not found: ID does not exist" Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.644388 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4z425"] Feb 16 21:03:11 crc kubenswrapper[4811]: I0216 21:03:11.654008 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4z425"] Feb 16 21:03:12 crc kubenswrapper[4811]: I0216 21:03:12.717702 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20c76084-401b-41ca-ad08-2752d2d7132b" path="/var/lib/kubelet/pods/20c76084-401b-41ca-ad08-2752d2d7132b/volumes" Feb 16 21:03:22 crc kubenswrapper[4811]: I0216 
21:03:22.940600 4811 scope.go:117] "RemoveContainer" containerID="eb433b149397712d2038a11fdd1bade053dff611fc9f72b04d554ac16a479858" Feb 16 21:03:22 crc kubenswrapper[4811]: I0216 21:03:22.964393 4811 scope.go:117] "RemoveContainer" containerID="9f5148a5ffdf417f4431aae45053dff7a2d093621f416a688d1d76690440ac30" Feb 16 21:03:22 crc kubenswrapper[4811]: I0216 21:03:22.982604 4811 scope.go:117] "RemoveContainer" containerID="a0ce51b75b64d5b4c5e84dafe3c092c838a653072eafcb5db877c69363610e98" Feb 16 21:03:23 crc kubenswrapper[4811]: I0216 21:03:23.008949 4811 scope.go:117] "RemoveContainer" containerID="51da2d75eac159cc50fed4692e024a1ef806614e23235d7818c113e3659b1f49" Feb 16 21:03:23 crc kubenswrapper[4811]: I0216 21:03:23.028964 4811 scope.go:117] "RemoveContainer" containerID="03e2520715019e952883ae03d9100d926eddfbea2b162533b115e73a180157be" Feb 16 21:03:23 crc kubenswrapper[4811]: I0216 21:03:23.050541 4811 scope.go:117] "RemoveContainer" containerID="e3a5e627c605f0107ea475b7219609b25ec387bfe8e4898d7024ad7d7e2be11e" Feb 16 21:04:48 crc kubenswrapper[4811]: I0216 21:04:48.364284 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:04:48 crc kubenswrapper[4811]: I0216 21:04:48.364918 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:05:03 crc kubenswrapper[4811]: I0216 21:05:03.649536 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp"] Feb 16 21:05:03 
crc kubenswrapper[4811]: E0216 21:05:03.650665 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20c76084-401b-41ca-ad08-2752d2d7132b" containerName="registry" Feb 16 21:05:03 crc kubenswrapper[4811]: I0216 21:05:03.650693 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="20c76084-401b-41ca-ad08-2752d2d7132b" containerName="registry" Feb 16 21:05:03 crc kubenswrapper[4811]: I0216 21:05:03.651003 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="20c76084-401b-41ca-ad08-2752d2d7132b" containerName="registry" Feb 16 21:05:03 crc kubenswrapper[4811]: I0216 21:05:03.652986 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp" Feb 16 21:05:03 crc kubenswrapper[4811]: I0216 21:05:03.656717 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 21:05:03 crc kubenswrapper[4811]: I0216 21:05:03.666640 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp"] Feb 16 21:05:03 crc kubenswrapper[4811]: I0216 21:05:03.690834 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/578040b1-e6b6-4064-a8fc-ee5635df7eee-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp\" (UID: \"578040b1-e6b6-4064-a8fc-ee5635df7eee\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp" Feb 16 21:05:03 crc kubenswrapper[4811]: I0216 21:05:03.691241 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsndg\" (UniqueName: \"kubernetes.io/projected/578040b1-e6b6-4064-a8fc-ee5635df7eee-kube-api-access-fsndg\") pod 
\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp\" (UID: \"578040b1-e6b6-4064-a8fc-ee5635df7eee\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp" Feb 16 21:05:03 crc kubenswrapper[4811]: I0216 21:05:03.691607 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/578040b1-e6b6-4064-a8fc-ee5635df7eee-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp\" (UID: \"578040b1-e6b6-4064-a8fc-ee5635df7eee\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp" Feb 16 21:05:03 crc kubenswrapper[4811]: I0216 21:05:03.793504 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsndg\" (UniqueName: \"kubernetes.io/projected/578040b1-e6b6-4064-a8fc-ee5635df7eee-kube-api-access-fsndg\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp\" (UID: \"578040b1-e6b6-4064-a8fc-ee5635df7eee\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp" Feb 16 21:05:03 crc kubenswrapper[4811]: I0216 21:05:03.794515 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/578040b1-e6b6-4064-a8fc-ee5635df7eee-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp\" (UID: \"578040b1-e6b6-4064-a8fc-ee5635df7eee\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp" Feb 16 21:05:03 crc kubenswrapper[4811]: I0216 21:05:03.795461 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/578040b1-e6b6-4064-a8fc-ee5635df7eee-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp\" (UID: \"578040b1-e6b6-4064-a8fc-ee5635df7eee\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp" Feb 16 21:05:03 crc kubenswrapper[4811]: I0216 21:05:03.796980 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/578040b1-e6b6-4064-a8fc-ee5635df7eee-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp\" (UID: \"578040b1-e6b6-4064-a8fc-ee5635df7eee\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp" Feb 16 21:05:03 crc kubenswrapper[4811]: I0216 21:05:03.797896 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/578040b1-e6b6-4064-a8fc-ee5635df7eee-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp\" (UID: \"578040b1-e6b6-4064-a8fc-ee5635df7eee\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp" Feb 16 21:05:03 crc kubenswrapper[4811]: I0216 21:05:03.831054 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsndg\" (UniqueName: \"kubernetes.io/projected/578040b1-e6b6-4064-a8fc-ee5635df7eee-kube-api-access-fsndg\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp\" (UID: \"578040b1-e6b6-4064-a8fc-ee5635df7eee\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp" Feb 16 21:05:03 crc kubenswrapper[4811]: I0216 21:05:03.975952 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp" Feb 16 21:05:04 crc kubenswrapper[4811]: I0216 21:05:04.277217 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp"] Feb 16 21:05:04 crc kubenswrapper[4811]: I0216 21:05:04.348889 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp" event={"ID":"578040b1-e6b6-4064-a8fc-ee5635df7eee","Type":"ContainerStarted","Data":"e35eb21d76cb5b6af12cd9a1aa6ca0b2815ed091c2713cf87afc3da9f2fc0c6c"} Feb 16 21:05:05 crc kubenswrapper[4811]: I0216 21:05:05.363063 4811 generic.go:334] "Generic (PLEG): container finished" podID="578040b1-e6b6-4064-a8fc-ee5635df7eee" containerID="5dc5d83f6026368e190d37fff9b01499b9eeccdf57cebaf469fda7013394dbce" exitCode=0 Feb 16 21:05:05 crc kubenswrapper[4811]: I0216 21:05:05.363164 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp" event={"ID":"578040b1-e6b6-4064-a8fc-ee5635df7eee","Type":"ContainerDied","Data":"5dc5d83f6026368e190d37fff9b01499b9eeccdf57cebaf469fda7013394dbce"} Feb 16 21:05:05 crc kubenswrapper[4811]: I0216 21:05:05.365319 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:05:07 crc kubenswrapper[4811]: I0216 21:05:07.380328 4811 generic.go:334] "Generic (PLEG): container finished" podID="578040b1-e6b6-4064-a8fc-ee5635df7eee" containerID="f3601c99a32ed8a5f238a8a4d049812855088206bbda0f6dfab7f686615f6f3f" exitCode=0 Feb 16 21:05:07 crc kubenswrapper[4811]: I0216 21:05:07.380396 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp" 
event={"ID":"578040b1-e6b6-4064-a8fc-ee5635df7eee","Type":"ContainerDied","Data":"f3601c99a32ed8a5f238a8a4d049812855088206bbda0f6dfab7f686615f6f3f"} Feb 16 21:05:08 crc kubenswrapper[4811]: I0216 21:05:08.392111 4811 generic.go:334] "Generic (PLEG): container finished" podID="578040b1-e6b6-4064-a8fc-ee5635df7eee" containerID="90bfe53ad5ed9e216cf21ad54be3499ddbf184e987df0c7fb27105ea78bb00c5" exitCode=0 Feb 16 21:05:08 crc kubenswrapper[4811]: I0216 21:05:08.393042 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp" event={"ID":"578040b1-e6b6-4064-a8fc-ee5635df7eee","Type":"ContainerDied","Data":"90bfe53ad5ed9e216cf21ad54be3499ddbf184e987df0c7fb27105ea78bb00c5"} Feb 16 21:05:09 crc kubenswrapper[4811]: I0216 21:05:09.673910 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp" Feb 16 21:05:09 crc kubenswrapper[4811]: I0216 21:05:09.793914 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/578040b1-e6b6-4064-a8fc-ee5635df7eee-bundle\") pod \"578040b1-e6b6-4064-a8fc-ee5635df7eee\" (UID: \"578040b1-e6b6-4064-a8fc-ee5635df7eee\") " Feb 16 21:05:09 crc kubenswrapper[4811]: I0216 21:05:09.794047 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsndg\" (UniqueName: \"kubernetes.io/projected/578040b1-e6b6-4064-a8fc-ee5635df7eee-kube-api-access-fsndg\") pod \"578040b1-e6b6-4064-a8fc-ee5635df7eee\" (UID: \"578040b1-e6b6-4064-a8fc-ee5635df7eee\") " Feb 16 21:05:09 crc kubenswrapper[4811]: I0216 21:05:09.794154 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/578040b1-e6b6-4064-a8fc-ee5635df7eee-util\") pod \"578040b1-e6b6-4064-a8fc-ee5635df7eee\" (UID: 
\"578040b1-e6b6-4064-a8fc-ee5635df7eee\") " Feb 16 21:05:09 crc kubenswrapper[4811]: I0216 21:05:09.797586 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/578040b1-e6b6-4064-a8fc-ee5635df7eee-bundle" (OuterVolumeSpecName: "bundle") pod "578040b1-e6b6-4064-a8fc-ee5635df7eee" (UID: "578040b1-e6b6-4064-a8fc-ee5635df7eee"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:05:09 crc kubenswrapper[4811]: I0216 21:05:09.802006 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/578040b1-e6b6-4064-a8fc-ee5635df7eee-kube-api-access-fsndg" (OuterVolumeSpecName: "kube-api-access-fsndg") pod "578040b1-e6b6-4064-a8fc-ee5635df7eee" (UID: "578040b1-e6b6-4064-a8fc-ee5635df7eee"). InnerVolumeSpecName "kube-api-access-fsndg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:05:09 crc kubenswrapper[4811]: I0216 21:05:09.896059 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsndg\" (UniqueName: \"kubernetes.io/projected/578040b1-e6b6-4064-a8fc-ee5635df7eee-kube-api-access-fsndg\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:09 crc kubenswrapper[4811]: I0216 21:05:09.896122 4811 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/578040b1-e6b6-4064-a8fc-ee5635df7eee-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:10 crc kubenswrapper[4811]: I0216 21:05:10.066598 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/578040b1-e6b6-4064-a8fc-ee5635df7eee-util" (OuterVolumeSpecName: "util") pod "578040b1-e6b6-4064-a8fc-ee5635df7eee" (UID: "578040b1-e6b6-4064-a8fc-ee5635df7eee"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:05:10 crc kubenswrapper[4811]: I0216 21:05:10.100141 4811 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/578040b1-e6b6-4064-a8fc-ee5635df7eee-util\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:10 crc kubenswrapper[4811]: I0216 21:05:10.408827 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp" event={"ID":"578040b1-e6b6-4064-a8fc-ee5635df7eee","Type":"ContainerDied","Data":"e35eb21d76cb5b6af12cd9a1aa6ca0b2815ed091c2713cf87afc3da9f2fc0c6c"} Feb 16 21:05:10 crc kubenswrapper[4811]: I0216 21:05:10.408891 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e35eb21d76cb5b6af12cd9a1aa6ca0b2815ed091c2713cf87afc3da9f2fc0c6c" Feb 16 21:05:10 crc kubenswrapper[4811]: I0216 21:05:10.408929 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.651299 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x2ggt"] Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.652213 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="northd" containerID="cri-o://a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819" gracePeriod=30 Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.652282 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="ovn-acl-logging" containerID="cri-o://bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4" 
gracePeriod=30 Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.652250 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="sbdb" containerID="cri-o://0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e" gracePeriod=30 Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.652311 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc" gracePeriod=30 Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.652216 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="kube-rbac-proxy-node" containerID="cri-o://f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7" gracePeriod=30 Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.652395 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="nbdb" containerID="cri-o://fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921" gracePeriod=30 Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.652415 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="ovn-controller" containerID="cri-o://b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f" gracePeriod=30 Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.682944 4811 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="ovnkube-controller" containerID="cri-o://23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d" gracePeriod=30 Feb 16 21:05:14 crc kubenswrapper[4811]: E0216 21:05:14.810837 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 16 21:05:14 crc kubenswrapper[4811]: E0216 21:05:14.817342 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 16 21:05:14 crc kubenswrapper[4811]: E0216 21:05:14.817935 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 16 21:05:14 crc kubenswrapper[4811]: E0216 21:05:14.819617 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 16 21:05:14 crc kubenswrapper[4811]: E0216 21:05:14.819642 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 16 21:05:14 crc kubenswrapper[4811]: E0216 21:05:14.819714 4811 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="sbdb" Feb 16 21:05:14 crc kubenswrapper[4811]: E0216 21:05:14.822590 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 16 21:05:14 crc kubenswrapper[4811]: E0216 21:05:14.822633 4811 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="nbdb" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.930529 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x2ggt_e1bbcd0c-f192-4210-831c-82e87a4768a7/ovnkube-controller/3.log" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.933429 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x2ggt_e1bbcd0c-f192-4210-831c-82e87a4768a7/ovn-acl-logging/0.log" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.933952 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x2ggt_e1bbcd0c-f192-4210-831c-82e87a4768a7/ovn-controller/0.log" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.934451 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996170 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lhtp5"] Feb 16 21:05:14 crc kubenswrapper[4811]: E0216 21:05:14.996458 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="ovnkube-controller" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996471 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="ovnkube-controller" Feb 16 21:05:14 crc kubenswrapper[4811]: E0216 21:05:14.996478 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="578040b1-e6b6-4064-a8fc-ee5635df7eee" containerName="extract" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996484 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="578040b1-e6b6-4064-a8fc-ee5635df7eee" containerName="extract" Feb 16 21:05:14 crc kubenswrapper[4811]: E0216 21:05:14.996495 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996501 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 21:05:14 crc kubenswrapper[4811]: E0216 21:05:14.996508 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="northd" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996514 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="northd" Feb 16 21:05:14 crc kubenswrapper[4811]: E0216 21:05:14.996521 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" 
containerName="ovnkube-controller" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996528 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="ovnkube-controller" Feb 16 21:05:14 crc kubenswrapper[4811]: E0216 21:05:14.996539 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="578040b1-e6b6-4064-a8fc-ee5635df7eee" containerName="pull" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996545 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="578040b1-e6b6-4064-a8fc-ee5635df7eee" containerName="pull" Feb 16 21:05:14 crc kubenswrapper[4811]: E0216 21:05:14.996552 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="578040b1-e6b6-4064-a8fc-ee5635df7eee" containerName="util" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996558 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="578040b1-e6b6-4064-a8fc-ee5635df7eee" containerName="util" Feb 16 21:05:14 crc kubenswrapper[4811]: E0216 21:05:14.996568 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="ovnkube-controller" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996574 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="ovnkube-controller" Feb 16 21:05:14 crc kubenswrapper[4811]: E0216 21:05:14.996584 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="kubecfg-setup" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996589 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="kubecfg-setup" Feb 16 21:05:14 crc kubenswrapper[4811]: E0216 21:05:14.996598 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="nbdb" Feb 16 21:05:14 crc 
kubenswrapper[4811]: I0216 21:05:14.996605 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="nbdb" Feb 16 21:05:14 crc kubenswrapper[4811]: E0216 21:05:14.996613 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="ovn-acl-logging" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996618 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="ovn-acl-logging" Feb 16 21:05:14 crc kubenswrapper[4811]: E0216 21:05:14.996624 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="kube-rbac-proxy-node" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996630 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="kube-rbac-proxy-node" Feb 16 21:05:14 crc kubenswrapper[4811]: E0216 21:05:14.996641 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="ovn-controller" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996646 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="ovn-controller" Feb 16 21:05:14 crc kubenswrapper[4811]: E0216 21:05:14.996654 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="sbdb" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996659 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="sbdb" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996752 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="northd" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996761 4811 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="ovnkube-controller" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996769 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="578040b1-e6b6-4064-a8fc-ee5635df7eee" containerName="extract" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996775 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="ovnkube-controller" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996782 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="sbdb" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996788 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="ovnkube-controller" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996796 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="kube-rbac-proxy-node" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996803 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996810 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="ovn-acl-logging" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996819 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="nbdb" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996826 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="ovn-controller" Feb 16 21:05:14 crc kubenswrapper[4811]: 
E0216 21:05:14.996916 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="ovnkube-controller" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996924 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="ovnkube-controller" Feb 16 21:05:14 crc kubenswrapper[4811]: E0216 21:05:14.996935 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="ovnkube-controller" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.996941 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="ovnkube-controller" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.997025 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="ovnkube-controller" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.997177 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerName="ovnkube-controller" Feb 16 21:05:14 crc kubenswrapper[4811]: I0216 21:05:14.998475 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.080394 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-var-lib-openvswitch\") pod \"e1bbcd0c-f192-4210-831c-82e87a4768a7\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.080445 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-node-log\") pod \"e1bbcd0c-f192-4210-831c-82e87a4768a7\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.080474 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-run-openvswitch\") pod \"e1bbcd0c-f192-4210-831c-82e87a4768a7\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.080507 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e1bbcd0c-f192-4210-831c-82e87a4768a7-ovnkube-script-lib\") pod \"e1bbcd0c-f192-4210-831c-82e87a4768a7\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.080538 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-etc-openvswitch\") pod \"e1bbcd0c-f192-4210-831c-82e87a4768a7\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.080538 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "e1bbcd0c-f192-4210-831c-82e87a4768a7" (UID: "e1bbcd0c-f192-4210-831c-82e87a4768a7"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.080556 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-run-ovn\") pod \"e1bbcd0c-f192-4210-831c-82e87a4768a7\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.080578 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-cni-netd\") pod \"e1bbcd0c-f192-4210-831c-82e87a4768a7\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.080599 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-run-systemd\") pod \"e1bbcd0c-f192-4210-831c-82e87a4768a7\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.080588 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-node-log" (OuterVolumeSpecName: "node-log") pod "e1bbcd0c-f192-4210-831c-82e87a4768a7" (UID: "e1bbcd0c-f192-4210-831c-82e87a4768a7"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.080655 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-slash" (OuterVolumeSpecName: "host-slash") pod "e1bbcd0c-f192-4210-831c-82e87a4768a7" (UID: "e1bbcd0c-f192-4210-831c-82e87a4768a7"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.080622 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-slash\") pod \"e1bbcd0c-f192-4210-831c-82e87a4768a7\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.080686 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "e1bbcd0c-f192-4210-831c-82e87a4768a7" (UID: "e1bbcd0c-f192-4210-831c-82e87a4768a7"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.080708 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-cni-bin\") pod \"e1bbcd0c-f192-4210-831c-82e87a4768a7\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.080728 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "e1bbcd0c-f192-4210-831c-82e87a4768a7" (UID: "e1bbcd0c-f192-4210-831c-82e87a4768a7"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.080738 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e1bbcd0c-f192-4210-831c-82e87a4768a7-env-overrides\") pod \"e1bbcd0c-f192-4210-831c-82e87a4768a7\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.080769 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e1bbcd0c-f192-4210-831c-82e87a4768a7-ovnkube-config\") pod \"e1bbcd0c-f192-4210-831c-82e87a4768a7\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.080785 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-run-netns\") pod \"e1bbcd0c-f192-4210-831c-82e87a4768a7\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.080805 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-kubelet\") pod \"e1bbcd0c-f192-4210-831c-82e87a4768a7\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.080828 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"e1bbcd0c-f192-4210-831c-82e87a4768a7\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.080862 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-run-ovn-kubernetes\") pod \"e1bbcd0c-f192-4210-831c-82e87a4768a7\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.080889 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-log-socket\") pod \"e1bbcd0c-f192-4210-831c-82e87a4768a7\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.080919 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e1bbcd0c-f192-4210-831c-82e87a4768a7-ovn-node-metrics-cert\") pod \"e1bbcd0c-f192-4210-831c-82e87a4768a7\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.080937 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-systemd-units\") pod \"e1bbcd0c-f192-4210-831c-82e87a4768a7\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.080960 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hmx4\" (UniqueName: \"kubernetes.io/projected/e1bbcd0c-f192-4210-831c-82e87a4768a7-kube-api-access-8hmx4\") pod \"e1bbcd0c-f192-4210-831c-82e87a4768a7\" (UID: \"e1bbcd0c-f192-4210-831c-82e87a4768a7\") " Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.080967 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1bbcd0c-f192-4210-831c-82e87a4768a7-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "e1bbcd0c-f192-4210-831c-82e87a4768a7" (UID: 
"e1bbcd0c-f192-4210-831c-82e87a4768a7"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081002 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "e1bbcd0c-f192-4210-831c-82e87a4768a7" (UID: "e1bbcd0c-f192-4210-831c-82e87a4768a7"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081026 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "e1bbcd0c-f192-4210-831c-82e87a4768a7" (UID: "e1bbcd0c-f192-4210-831c-82e87a4768a7"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081047 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "e1bbcd0c-f192-4210-831c-82e87a4768a7" (UID: "e1bbcd0c-f192-4210-831c-82e87a4768a7"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081067 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "e1bbcd0c-f192-4210-831c-82e87a4768a7" (UID: "e1bbcd0c-f192-4210-831c-82e87a4768a7"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081083 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-etc-openvswitch\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081118 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk5m6\" (UniqueName: \"kubernetes.io/projected/e3e06004-9d25-4d9c-b66e-b537cfd21bac-kube-api-access-lk5m6\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081160 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-run-openvswitch\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081183 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-systemd-units\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081230 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081264 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e3e06004-9d25-4d9c-b66e-b537cfd21bac-env-overrides\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081285 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-host-cni-netd\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081323 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e3e06004-9d25-4d9c-b66e-b537cfd21bac-ovnkube-script-lib\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081335 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1bbcd0c-f192-4210-831c-82e87a4768a7-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "e1bbcd0c-f192-4210-831c-82e87a4768a7" (UID: "e1bbcd0c-f192-4210-831c-82e87a4768a7"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081345 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-run-ovn\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081366 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "e1bbcd0c-f192-4210-831c-82e87a4768a7" (UID: "e1bbcd0c-f192-4210-831c-82e87a4768a7"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081370 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e3e06004-9d25-4d9c-b66e-b537cfd21bac-ovnkube-config\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081402 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "e1bbcd0c-f192-4210-831c-82e87a4768a7" (UID: "e1bbcd0c-f192-4210-831c-82e87a4768a7"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081424 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "e1bbcd0c-f192-4210-831c-82e87a4768a7" (UID: "e1bbcd0c-f192-4210-831c-82e87a4768a7"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081507 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-host-kubelet\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081534 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-node-log\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081553 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-host-slash\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081581 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-log-socket\") pod 
\"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081604 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-var-lib-openvswitch\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081631 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-host-cni-bin\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081654 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e3e06004-9d25-4d9c-b66e-b537cfd21bac-ovn-node-metrics-cert\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081679 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-host-run-ovn-kubernetes\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081701 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-run-systemd\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081721 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-host-run-netns\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081773 4811 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081788 4811 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-node-log\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081799 4811 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081810 4811 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e1bbcd0c-f192-4210-831c-82e87a4768a7-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081820 4811 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:15 crc kubenswrapper[4811]: 
I0216 21:05:15.081831 4811 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081842 4811 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081853 4811 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-slash\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081864 4811 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081875 4811 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e1bbcd0c-f192-4210-831c-82e87a4768a7-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081887 4811 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081899 4811 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081910 4811 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081923 4811 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081628 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1bbcd0c-f192-4210-831c-82e87a4768a7-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "e1bbcd0c-f192-4210-831c-82e87a4768a7" (UID: "e1bbcd0c-f192-4210-831c-82e87a4768a7"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.081993 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-log-socket" (OuterVolumeSpecName: "log-socket") pod "e1bbcd0c-f192-4210-831c-82e87a4768a7" (UID: "e1bbcd0c-f192-4210-831c-82e87a4768a7"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.082017 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "e1bbcd0c-f192-4210-831c-82e87a4768a7" (UID: "e1bbcd0c-f192-4210-831c-82e87a4768a7"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.088873 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1bbcd0c-f192-4210-831c-82e87a4768a7-kube-api-access-8hmx4" (OuterVolumeSpecName: "kube-api-access-8hmx4") pod "e1bbcd0c-f192-4210-831c-82e87a4768a7" (UID: "e1bbcd0c-f192-4210-831c-82e87a4768a7"). InnerVolumeSpecName "kube-api-access-8hmx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.093784 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1bbcd0c-f192-4210-831c-82e87a4768a7-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "e1bbcd0c-f192-4210-831c-82e87a4768a7" (UID: "e1bbcd0c-f192-4210-831c-82e87a4768a7"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.098503 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "e1bbcd0c-f192-4210-831c-82e87a4768a7" (UID: "e1bbcd0c-f192-4210-831c-82e87a4768a7"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.182860 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-etc-openvswitch\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.182948 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk5m6\" (UniqueName: \"kubernetes.io/projected/e3e06004-9d25-4d9c-b66e-b537cfd21bac-kube-api-access-lk5m6\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183001 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-run-openvswitch\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183080 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-systemd-units\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183125 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183230 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e3e06004-9d25-4d9c-b66e-b537cfd21bac-env-overrides\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183270 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-host-cni-netd\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183305 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e3e06004-9d25-4d9c-b66e-b537cfd21bac-ovnkube-script-lib\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183345 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-run-ovn\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183384 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e3e06004-9d25-4d9c-b66e-b537cfd21bac-ovnkube-config\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc 
kubenswrapper[4811]: I0216 21:05:15.183409 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-systemd-units\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183428 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-host-kubelet\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183485 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-host-kubelet\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183496 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-node-log\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183519 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-host-slash\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183545 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-log-socket\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183550 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183564 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-var-lib-openvswitch\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183602 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-host-cni-bin\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183631 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e3e06004-9d25-4d9c-b66e-b537cfd21bac-ovn-node-metrics-cert\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183673 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-host-run-ovn-kubernetes\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183709 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-run-systemd\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183737 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-host-run-netns\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183815 4811 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e1bbcd0c-f192-4210-831c-82e87a4768a7-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183830 4811 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-log-socket\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183842 4811 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e1bbcd0c-f192-4210-831c-82e87a4768a7-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183856 4811 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183868 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hmx4\" (UniqueName: \"kubernetes.io/projected/e1bbcd0c-f192-4210-831c-82e87a4768a7-kube-api-access-8hmx4\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183881 4811 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e1bbcd0c-f192-4210-831c-82e87a4768a7-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183915 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-host-run-netns\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.183945 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-etc-openvswitch\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.184343 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-run-openvswitch\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.184389 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-node-log\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.184441 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-host-slash\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.184477 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-log-socket\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.184518 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-var-lib-openvswitch\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.184519 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e3e06004-9d25-4d9c-b66e-b537cfd21bac-env-overrides\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.184555 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-host-cni-bin\") pod \"ovnkube-node-lhtp5\" (UID: 
\"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.184602 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-host-cni-netd\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.185379 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-host-run-ovn-kubernetes\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.185446 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-run-systemd\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.185499 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e3e06004-9d25-4d9c-b66e-b537cfd21bac-run-ovn\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.186817 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e3e06004-9d25-4d9c-b66e-b537cfd21bac-ovnkube-config\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc 
kubenswrapper[4811]: I0216 21:05:15.187534 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e3e06004-9d25-4d9c-b66e-b537cfd21bac-ovnkube-script-lib\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.188222 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e3e06004-9d25-4d9c-b66e-b537cfd21bac-ovn-node-metrics-cert\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.206796 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk5m6\" (UniqueName: \"kubernetes.io/projected/e3e06004-9d25-4d9c-b66e-b537cfd21bac-kube-api-access-lk5m6\") pod \"ovnkube-node-lhtp5\" (UID: \"e3e06004-9d25-4d9c-b66e-b537cfd21bac\") " pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.314393 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.442243 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mgctp_a946fefd-e014-48b1-995b-ef221a88bc73/kube-multus/2.log" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.443239 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mgctp_a946fefd-e014-48b1-995b-ef221a88bc73/kube-multus/1.log" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.443328 4811 generic.go:334] "Generic (PLEG): container finished" podID="a946fefd-e014-48b1-995b-ef221a88bc73" containerID="bf50f864995f5e7737f081953d628014fddf69c787e71973d21b61c272b0a372" exitCode=2 Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.443421 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mgctp" event={"ID":"a946fefd-e014-48b1-995b-ef221a88bc73","Type":"ContainerDied","Data":"bf50f864995f5e7737f081953d628014fddf69c787e71973d21b61c272b0a372"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.443485 4811 scope.go:117] "RemoveContainer" containerID="276a19c80bef50556fb786571f8b1c5f5d2a798fa193fc5854a3cafa254b32c8" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.445162 4811 scope.go:117] "RemoveContainer" containerID="bf50f864995f5e7737f081953d628014fddf69c787e71973d21b61c272b0a372" Feb 16 21:05:15 crc kubenswrapper[4811]: E0216 21:05:15.445406 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-mgctp_openshift-multus(a946fefd-e014-48b1-995b-ef221a88bc73)\"" pod="openshift-multus/multus-mgctp" podUID="a946fefd-e014-48b1-995b-ef221a88bc73" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.451126 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x2ggt_e1bbcd0c-f192-4210-831c-82e87a4768a7/ovnkube-controller/3.log" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.459278 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x2ggt_e1bbcd0c-f192-4210-831c-82e87a4768a7/ovn-acl-logging/0.log" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.460242 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x2ggt_e1bbcd0c-f192-4210-831c-82e87a4768a7/ovn-controller/0.log" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462175 4811 generic.go:334] "Generic (PLEG): container finished" podID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerID="23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d" exitCode=0 Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462256 4811 generic.go:334] "Generic (PLEG): container finished" podID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerID="0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e" exitCode=0 Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462276 4811 generic.go:334] "Generic (PLEG): container finished" podID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerID="fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921" exitCode=0 Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462293 4811 generic.go:334] "Generic (PLEG): container finished" podID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerID="a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819" exitCode=0 Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462310 4811 generic.go:334] "Generic (PLEG): container finished" podID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerID="83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc" exitCode=0 Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462328 4811 generic.go:334] "Generic (PLEG): container finished" 
podID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerID="f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7" exitCode=0 Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462341 4811 generic.go:334] "Generic (PLEG): container finished" podID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerID="bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4" exitCode=143 Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462355 4811 generic.go:334] "Generic (PLEG): container finished" podID="e1bbcd0c-f192-4210-831c-82e87a4768a7" containerID="b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f" exitCode=143 Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462487 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerDied","Data":"23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462578 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerDied","Data":"0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462607 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerDied","Data":"fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462620 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462651 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerDied","Data":"a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462667 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerDied","Data":"83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462682 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerDied","Data":"f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462696 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462733 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462742 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462749 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462757 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462765 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462772 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462780 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462858 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462891 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462906 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerDied","Data":"bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462922 4811 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462932 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462941 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462977 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462987 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.462995 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.463007 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.463015 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.463023 4811 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.463029 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.463080 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerDied","Data":"b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.463095 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.463104 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.463112 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.463122 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.463131 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819"} Feb 16 
21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.463164 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.463175 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.463185 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.463228 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.463269 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.463283 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2ggt" event={"ID":"e1bbcd0c-f192-4210-831c-82e87a4768a7","Type":"ContainerDied","Data":"29472bcfd2ec457b40cacc17f1865ee2e7ec33788f857f742628e8b9ff741552"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.463323 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.463332 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.463339 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.463347 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.463354 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.463361 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.463389 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.463396 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.463404 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.463410 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.468886 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" event={"ID":"e3e06004-9d25-4d9c-b66e-b537cfd21bac","Type":"ContainerStarted","Data":"06074e6518d241b097ec854695829158b1ee737a66701d6aa767f920e4b88258"} Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.607671 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x2ggt"] Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.619893 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x2ggt"] Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.628219 4811 scope.go:117] "RemoveContainer" containerID="23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.655119 4811 scope.go:117] "RemoveContainer" containerID="6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.678057 4811 scope.go:117] "RemoveContainer" containerID="0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.692415 4811 scope.go:117] "RemoveContainer" containerID="fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.707671 4811 scope.go:117] "RemoveContainer" containerID="a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.750671 4811 scope.go:117] "RemoveContainer" containerID="83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.763794 4811 scope.go:117] "RemoveContainer" 
containerID="f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.813814 4811 scope.go:117] "RemoveContainer" containerID="bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.830903 4811 scope.go:117] "RemoveContainer" containerID="b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.847127 4811 scope.go:117] "RemoveContainer" containerID="29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.864758 4811 scope.go:117] "RemoveContainer" containerID="23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d" Feb 16 21:05:15 crc kubenswrapper[4811]: E0216 21:05:15.865570 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d\": container with ID starting with 23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d not found: ID does not exist" containerID="23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.865611 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d"} err="failed to get container status \"23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d\": rpc error: code = NotFound desc = could not find container \"23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d\": container with ID starting with 23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.865647 4811 scope.go:117] "RemoveContainer" 
containerID="6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69" Feb 16 21:05:15 crc kubenswrapper[4811]: E0216 21:05:15.866027 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69\": container with ID starting with 6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69 not found: ID does not exist" containerID="6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.866064 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69"} err="failed to get container status \"6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69\": rpc error: code = NotFound desc = could not find container \"6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69\": container with ID starting with 6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69 not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.866104 4811 scope.go:117] "RemoveContainer" containerID="0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e" Feb 16 21:05:15 crc kubenswrapper[4811]: E0216 21:05:15.866465 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\": container with ID starting with 0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e not found: ID does not exist" containerID="0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.866506 4811 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e"} err="failed to get container status \"0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\": rpc error: code = NotFound desc = could not find container \"0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\": container with ID starting with 0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.866539 4811 scope.go:117] "RemoveContainer" containerID="fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921" Feb 16 21:05:15 crc kubenswrapper[4811]: E0216 21:05:15.866905 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\": container with ID starting with fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921 not found: ID does not exist" containerID="fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.866938 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921"} err="failed to get container status \"fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\": rpc error: code = NotFound desc = could not find container \"fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\": container with ID starting with fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921 not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.866976 4811 scope.go:117] "RemoveContainer" containerID="a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819" Feb 16 21:05:15 crc kubenswrapper[4811]: E0216 21:05:15.867270 4811 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\": container with ID starting with a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819 not found: ID does not exist" containerID="a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.867291 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819"} err="failed to get container status \"a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\": rpc error: code = NotFound desc = could not find container \"a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\": container with ID starting with a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819 not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.867304 4811 scope.go:117] "RemoveContainer" containerID="83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc" Feb 16 21:05:15 crc kubenswrapper[4811]: E0216 21:05:15.867547 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\": container with ID starting with 83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc not found: ID does not exist" containerID="83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.867568 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc"} err="failed to get container status \"83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\": rpc error: code = NotFound desc = could not find container 
\"83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\": container with ID starting with 83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.867584 4811 scope.go:117] "RemoveContainer" containerID="f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7" Feb 16 21:05:15 crc kubenswrapper[4811]: E0216 21:05:15.867855 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\": container with ID starting with f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7 not found: ID does not exist" containerID="f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.867889 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7"} err="failed to get container status \"f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\": rpc error: code = NotFound desc = could not find container \"f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\": container with ID starting with f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7 not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.867909 4811 scope.go:117] "RemoveContainer" containerID="bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4" Feb 16 21:05:15 crc kubenswrapper[4811]: E0216 21:05:15.868174 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\": container with ID starting with bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4 not found: ID does not exist" 
containerID="bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.868216 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4"} err="failed to get container status \"bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\": rpc error: code = NotFound desc = could not find container \"bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\": container with ID starting with bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4 not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.868233 4811 scope.go:117] "RemoveContainer" containerID="b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f" Feb 16 21:05:15 crc kubenswrapper[4811]: E0216 21:05:15.868464 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\": container with ID starting with b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f not found: ID does not exist" containerID="b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.868491 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f"} err="failed to get container status \"b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\": rpc error: code = NotFound desc = could not find container \"b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\": container with ID starting with b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.868507 4811 scope.go:117] 
"RemoveContainer" containerID="29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db" Feb 16 21:05:15 crc kubenswrapper[4811]: E0216 21:05:15.868736 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\": container with ID starting with 29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db not found: ID does not exist" containerID="29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.868790 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db"} err="failed to get container status \"29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\": rpc error: code = NotFound desc = could not find container \"29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\": container with ID starting with 29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.868811 4811 scope.go:117] "RemoveContainer" containerID="23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.869055 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d"} err="failed to get container status \"23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d\": rpc error: code = NotFound desc = could not find container \"23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d\": container with ID starting with 23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.869081 4811 
scope.go:117] "RemoveContainer" containerID="6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.869497 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69"} err="failed to get container status \"6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69\": rpc error: code = NotFound desc = could not find container \"6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69\": container with ID starting with 6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69 not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.869526 4811 scope.go:117] "RemoveContainer" containerID="0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.869820 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e"} err="failed to get container status \"0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\": rpc error: code = NotFound desc = could not find container \"0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\": container with ID starting with 0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.869846 4811 scope.go:117] "RemoveContainer" containerID="fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.870173 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921"} err="failed to get container status \"fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\": rpc 
error: code = NotFound desc = could not find container \"fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\": container with ID starting with fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921 not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.870234 4811 scope.go:117] "RemoveContainer" containerID="a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.870540 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819"} err="failed to get container status \"a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\": rpc error: code = NotFound desc = could not find container \"a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\": container with ID starting with a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819 not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.870560 4811 scope.go:117] "RemoveContainer" containerID="83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.870856 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc"} err="failed to get container status \"83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\": rpc error: code = NotFound desc = could not find container \"83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\": container with ID starting with 83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.870892 4811 scope.go:117] "RemoveContainer" containerID="f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7" Feb 16 21:05:15 crc 
kubenswrapper[4811]: I0216 21:05:15.871217 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7"} err="failed to get container status \"f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\": rpc error: code = NotFound desc = could not find container \"f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\": container with ID starting with f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7 not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.871237 4811 scope.go:117] "RemoveContainer" containerID="bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.871556 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4"} err="failed to get container status \"bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\": rpc error: code = NotFound desc = could not find container \"bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\": container with ID starting with bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4 not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.871579 4811 scope.go:117] "RemoveContainer" containerID="b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.872375 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f"} err="failed to get container status \"b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\": rpc error: code = NotFound desc = could not find container \"b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\": container 
with ID starting with b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.872403 4811 scope.go:117] "RemoveContainer" containerID="29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.872663 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db"} err="failed to get container status \"29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\": rpc error: code = NotFound desc = could not find container \"29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\": container with ID starting with 29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.872680 4811 scope.go:117] "RemoveContainer" containerID="23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.872980 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d"} err="failed to get container status \"23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d\": rpc error: code = NotFound desc = could not find container \"23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d\": container with ID starting with 23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.873027 4811 scope.go:117] "RemoveContainer" containerID="6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.873401 4811 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69"} err="failed to get container status \"6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69\": rpc error: code = NotFound desc = could not find container \"6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69\": container with ID starting with 6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69 not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.873425 4811 scope.go:117] "RemoveContainer" containerID="0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.873670 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e"} err="failed to get container status \"0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\": rpc error: code = NotFound desc = could not find container \"0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\": container with ID starting with 0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.873686 4811 scope.go:117] "RemoveContainer" containerID="fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.873958 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921"} err="failed to get container status \"fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\": rpc error: code = NotFound desc = could not find container \"fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\": container with ID starting with fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921 not found: ID does not 
exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.873982 4811 scope.go:117] "RemoveContainer" containerID="a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.874650 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819"} err="failed to get container status \"a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\": rpc error: code = NotFound desc = could not find container \"a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\": container with ID starting with a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819 not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.874675 4811 scope.go:117] "RemoveContainer" containerID="83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.874913 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc"} err="failed to get container status \"83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\": rpc error: code = NotFound desc = could not find container \"83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\": container with ID starting with 83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.874931 4811 scope.go:117] "RemoveContainer" containerID="f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.875276 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7"} err="failed to get container status 
\"f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\": rpc error: code = NotFound desc = could not find container \"f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\": container with ID starting with f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7 not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.875301 4811 scope.go:117] "RemoveContainer" containerID="bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.875544 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4"} err="failed to get container status \"bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\": rpc error: code = NotFound desc = could not find container \"bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\": container with ID starting with bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4 not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.875565 4811 scope.go:117] "RemoveContainer" containerID="b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.875866 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f"} err="failed to get container status \"b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\": rpc error: code = NotFound desc = could not find container \"b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\": container with ID starting with b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.875912 4811 scope.go:117] "RemoveContainer" 
containerID="29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.877739 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db"} err="failed to get container status \"29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\": rpc error: code = NotFound desc = could not find container \"29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\": container with ID starting with 29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.877768 4811 scope.go:117] "RemoveContainer" containerID="23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.878092 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d"} err="failed to get container status \"23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d\": rpc error: code = NotFound desc = could not find container \"23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d\": container with ID starting with 23b5c78b3723212410376aad7f8b0bff7e9de5801252ca5c32a1ca85c81d253d not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.878142 4811 scope.go:117] "RemoveContainer" containerID="6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.878432 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69"} err="failed to get container status \"6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69\": rpc error: code = NotFound desc = could 
not find container \"6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69\": container with ID starting with 6cee4aa612b887c2a26175ea413d8dd81131edeadf350f04e47f1a5488f6de69 not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.878470 4811 scope.go:117] "RemoveContainer" containerID="0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.878685 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e"} err="failed to get container status \"0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\": rpc error: code = NotFound desc = could not find container \"0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e\": container with ID starting with 0985d21cd7089b4c5dc4d6968ef80d0b9fa382812319cbb6ac66cb4ccbd2c78e not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.878706 4811 scope.go:117] "RemoveContainer" containerID="fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.878898 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921"} err="failed to get container status \"fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\": rpc error: code = NotFound desc = could not find container \"fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921\": container with ID starting with fcc5778f8ebe41383882cd0a4d76beddb1b495b8fa14cc4352b363587e10b921 not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.878921 4811 scope.go:117] "RemoveContainer" containerID="a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 
21:05:15.879146 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819"} err="failed to get container status \"a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\": rpc error: code = NotFound desc = could not find container \"a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819\": container with ID starting with a18648419aa7641be7f8254099f5eebbd99e48ceb1976b70b535e08718309819 not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.879166 4811 scope.go:117] "RemoveContainer" containerID="83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.879514 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc"} err="failed to get container status \"83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\": rpc error: code = NotFound desc = could not find container \"83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc\": container with ID starting with 83c404fad38ff5921ecc81ffd6f1e0ecd9682bc1f9e7cf9649f7cb360accd7fc not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.879538 4811 scope.go:117] "RemoveContainer" containerID="f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.879736 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7"} err="failed to get container status \"f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\": rpc error: code = NotFound desc = could not find container \"f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7\": container with ID starting with 
f07ca0c3fcb487c53ff4f31896083d9a747e6363992b4a8898c3fcecf7d700c7 not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.879762 4811 scope.go:117] "RemoveContainer" containerID="bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.879996 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4"} err="failed to get container status \"bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\": rpc error: code = NotFound desc = could not find container \"bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4\": container with ID starting with bd2d9d3a5286c9efb0d44aa5dd6170f9b3d5778f12524935e7b4fec929ea53a4 not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.880025 4811 scope.go:117] "RemoveContainer" containerID="b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.880339 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f"} err="failed to get container status \"b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\": rpc error: code = NotFound desc = could not find container \"b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f\": container with ID starting with b4fff69ad4e673c649626f2151445409db3d89e3171823ab9f8cc737eb04b55f not found: ID does not exist" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.880368 4811 scope.go:117] "RemoveContainer" containerID="29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db" Feb 16 21:05:15 crc kubenswrapper[4811]: I0216 21:05:15.880644 4811 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db"} err="failed to get container status \"29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\": rpc error: code = NotFound desc = could not find container \"29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db\": container with ID starting with 29fc0a4a4cdb1c633f2e26d58c204c4cc4b098a59d7ae1b1b22d33575a49a0db not found: ID does not exist" Feb 16 21:05:16 crc kubenswrapper[4811]: I0216 21:05:16.477933 4811 generic.go:334] "Generic (PLEG): container finished" podID="e3e06004-9d25-4d9c-b66e-b537cfd21bac" containerID="b2448f6b23c06eb5dc942e3739d480cfc11319f8c4663e6e36471392b29989aa" exitCode=0 Feb 16 21:05:16 crc kubenswrapper[4811]: I0216 21:05:16.478027 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" event={"ID":"e3e06004-9d25-4d9c-b66e-b537cfd21bac","Type":"ContainerDied","Data":"b2448f6b23c06eb5dc942e3739d480cfc11319f8c4663e6e36471392b29989aa"} Feb 16 21:05:16 crc kubenswrapper[4811]: I0216 21:05:16.481452 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mgctp_a946fefd-e014-48b1-995b-ef221a88bc73/kube-multus/2.log" Feb 16 21:05:16 crc kubenswrapper[4811]: I0216 21:05:16.710080 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1bbcd0c-f192-4210-831c-82e87a4768a7" path="/var/lib/kubelet/pods/e1bbcd0c-f192-4210-831c-82e87a4768a7/volumes" Feb 16 21:05:17 crc kubenswrapper[4811]: I0216 21:05:17.492302 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" event={"ID":"e3e06004-9d25-4d9c-b66e-b537cfd21bac","Type":"ContainerStarted","Data":"5ad4f6c88c573cb3fce8508721f09164b4c1b7daa8099276ea24aa5fd8e1c17d"} Feb 16 21:05:17 crc kubenswrapper[4811]: I0216 21:05:17.492795 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" 
event={"ID":"e3e06004-9d25-4d9c-b66e-b537cfd21bac","Type":"ContainerStarted","Data":"c418c5fdd5705d3144605db09a54628e8bf9b0dc87b49d96934781381f59cf29"} Feb 16 21:05:17 crc kubenswrapper[4811]: I0216 21:05:17.492811 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" event={"ID":"e3e06004-9d25-4d9c-b66e-b537cfd21bac","Type":"ContainerStarted","Data":"7b6092b3aab2ad8a2658cac8966025feb6f2bd3478d0f752a1f42e8bffff4c17"} Feb 16 21:05:17 crc kubenswrapper[4811]: I0216 21:05:17.492822 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" event={"ID":"e3e06004-9d25-4d9c-b66e-b537cfd21bac","Type":"ContainerStarted","Data":"0b76dec0d015985cecb50db5098838d43a4abd38716fc556cfe55f8c85abe373"} Feb 16 21:05:17 crc kubenswrapper[4811]: I0216 21:05:17.492837 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" event={"ID":"e3e06004-9d25-4d9c-b66e-b537cfd21bac","Type":"ContainerStarted","Data":"5801c72ed892f0eda2a6012f9bf1679ecc5f3a60263d3cccd53f757dab09477b"} Feb 16 21:05:17 crc kubenswrapper[4811]: I0216 21:05:17.492848 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" event={"ID":"e3e06004-9d25-4d9c-b66e-b537cfd21bac","Type":"ContainerStarted","Data":"b3485300af4f2e4b3e7a540f4be6b5abcc1dce0b8795aa17af1e5ee96c18b130"} Feb 16 21:05:18 crc kubenswrapper[4811]: I0216 21:05:18.364355 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:05:18 crc kubenswrapper[4811]: I0216 21:05:18.364700 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" 
podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:05:20 crc kubenswrapper[4811]: I0216 21:05:20.514147 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" event={"ID":"e3e06004-9d25-4d9c-b66e-b537cfd21bac","Type":"ContainerStarted","Data":"b192712d6ee1e4425a05e5475777b4745f318bd7a2b73e91e6713400c2c56608"} Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.492826 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-88jqj"] Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.493618 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-88jqj" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.495414 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-vf5m9" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.495925 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.497389 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.580980 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5s6r\" (UniqueName: \"kubernetes.io/projected/dc3ef150-066b-4fd0-bff2-4606e25694e4-kube-api-access-q5s6r\") pod \"obo-prometheus-operator-68bc856cb9-88jqj\" (UID: \"dc3ef150-066b-4fd0-bff2-4606e25694e4\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-88jqj" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.626566 4811 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs"] Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.627307 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.634094 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4"] Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.635004 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.636454 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.640568 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-dp4qn" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.682657 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5s6r\" (UniqueName: \"kubernetes.io/projected/dc3ef150-066b-4fd0-bff2-4606e25694e4-kube-api-access-q5s6r\") pod \"obo-prometheus-operator-68bc856cb9-88jqj\" (UID: \"dc3ef150-066b-4fd0-bff2-4606e25694e4\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-88jqj" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.699714 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5s6r\" (UniqueName: \"kubernetes.io/projected/dc3ef150-066b-4fd0-bff2-4606e25694e4-kube-api-access-q5s6r\") pod \"obo-prometheus-operator-68bc856cb9-88jqj\" (UID: \"dc3ef150-066b-4fd0-bff2-4606e25694e4\") 
" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-88jqj" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.728257 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-4z4hh"] Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.728927 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.731308 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.732417 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-ht9sx" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.783385 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dbbd192e-9df8-40ec-9397-f9eebf6b9111-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-849f874f67-zr8hs\" (UID: \"dbbd192e-9df8-40ec-9397-f9eebf6b9111\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.783437 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbbd192e-9df8-40ec-9397-f9eebf6b9111-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-849f874f67-zr8hs\" (UID: \"dbbd192e-9df8-40ec-9397-f9eebf6b9111\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.783483 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/42b00ab7-c05d-40bc-a605-10d2bc710ec5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-849f874f67-kc8n4\" (UID: \"42b00ab7-c05d-40bc-a605-10d2bc710ec5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.783536 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42b00ab7-c05d-40bc-a605-10d2bc710ec5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-849f874f67-kc8n4\" (UID: \"42b00ab7-c05d-40bc-a605-10d2bc710ec5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.813929 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-88jqj" Feb 16 21:05:21 crc kubenswrapper[4811]: E0216 21:05:21.836160 4811 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-88jqj_openshift-operators_dc3ef150-066b-4fd0-bff2-4606e25694e4_0(99174a2d6916197a5e843bfbb230534bd5db8a3b5ea68d58b2e8b4825901d036): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:05:21 crc kubenswrapper[4811]: E0216 21:05:21.836309 4811 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-88jqj_openshift-operators_dc3ef150-066b-4fd0-bff2-4606e25694e4_0(99174a2d6916197a5e843bfbb230534bd5db8a3b5ea68d58b2e8b4825901d036): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-88jqj" Feb 16 21:05:21 crc kubenswrapper[4811]: E0216 21:05:21.836585 4811 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-88jqj_openshift-operators_dc3ef150-066b-4fd0-bff2-4606e25694e4_0(99174a2d6916197a5e843bfbb230534bd5db8a3b5ea68d58b2e8b4825901d036): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-88jqj" Feb 16 21:05:21 crc kubenswrapper[4811]: E0216 21:05:21.836701 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-88jqj_openshift-operators(dc3ef150-066b-4fd0-bff2-4606e25694e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-88jqj_openshift-operators(dc3ef150-066b-4fd0-bff2-4606e25694e4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-88jqj_openshift-operators_dc3ef150-066b-4fd0-bff2-4606e25694e4_0(99174a2d6916197a5e843bfbb230534bd5db8a3b5ea68d58b2e8b4825901d036): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-88jqj" podUID="dc3ef150-066b-4fd0-bff2-4606e25694e4" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.885011 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dbbd192e-9df8-40ec-9397-f9eebf6b9111-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-849f874f67-zr8hs\" (UID: \"dbbd192e-9df8-40ec-9397-f9eebf6b9111\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.885057 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbbd192e-9df8-40ec-9397-f9eebf6b9111-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-849f874f67-zr8hs\" (UID: \"dbbd192e-9df8-40ec-9397-f9eebf6b9111\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.885083 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw2zg\" (UniqueName: \"kubernetes.io/projected/bba265f5-85c6-4130-a470-839286f95d5b-kube-api-access-dw2zg\") pod \"observability-operator-59bdc8b94-4z4hh\" (UID: \"bba265f5-85c6-4130-a470-839286f95d5b\") " pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.885108 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b00ab7-c05d-40bc-a605-10d2bc710ec5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-849f874f67-kc8n4\" (UID: \"42b00ab7-c05d-40bc-a605-10d2bc710ec5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4" Feb 16 21:05:21 crc 
kubenswrapper[4811]: I0216 21:05:21.885144 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42b00ab7-c05d-40bc-a605-10d2bc710ec5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-849f874f67-kc8n4\" (UID: \"42b00ab7-c05d-40bc-a605-10d2bc710ec5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.885173 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/bba265f5-85c6-4130-a470-839286f95d5b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-4z4hh\" (UID: \"bba265f5-85c6-4130-a470-839286f95d5b\") " pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.891681 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbbd192e-9df8-40ec-9397-f9eebf6b9111-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-849f874f67-zr8hs\" (UID: \"dbbd192e-9df8-40ec-9397-f9eebf6b9111\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.892499 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42b00ab7-c05d-40bc-a605-10d2bc710ec5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-849f874f67-kc8n4\" (UID: \"42b00ab7-c05d-40bc-a605-10d2bc710ec5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.894768 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/dbbd192e-9df8-40ec-9397-f9eebf6b9111-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-849f874f67-zr8hs\" (UID: \"dbbd192e-9df8-40ec-9397-f9eebf6b9111\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.894879 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b00ab7-c05d-40bc-a605-10d2bc710ec5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-849f874f67-kc8n4\" (UID: \"42b00ab7-c05d-40bc-a605-10d2bc710ec5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.908787 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-8r4zv"] Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.909479 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.911435 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-29fw4" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.944728 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.952991 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4" Feb 16 21:05:21 crc kubenswrapper[4811]: E0216 21:05:21.970429 4811 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-849f874f67-zr8hs_openshift-operators_dbbd192e-9df8-40ec-9397-f9eebf6b9111_0(2873ae4827a14b0a7dcaaf02c10d9608e831168e9b0a5686b4f893895f3e19fd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:05:21 crc kubenswrapper[4811]: E0216 21:05:21.970498 4811 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-849f874f67-zr8hs_openshift-operators_dbbd192e-9df8-40ec-9397-f9eebf6b9111_0(2873ae4827a14b0a7dcaaf02c10d9608e831168e9b0a5686b4f893895f3e19fd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs" Feb 16 21:05:21 crc kubenswrapper[4811]: E0216 21:05:21.970529 4811 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-849f874f67-zr8hs_openshift-operators_dbbd192e-9df8-40ec-9397-f9eebf6b9111_0(2873ae4827a14b0a7dcaaf02c10d9608e831168e9b0a5686b4f893895f3e19fd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs" Feb 16 21:05:21 crc kubenswrapper[4811]: E0216 21:05:21.970601 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-849f874f67-zr8hs_openshift-operators(dbbd192e-9df8-40ec-9397-f9eebf6b9111)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-849f874f67-zr8hs_openshift-operators(dbbd192e-9df8-40ec-9397-f9eebf6b9111)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-849f874f67-zr8hs_openshift-operators_dbbd192e-9df8-40ec-9397-f9eebf6b9111_0(2873ae4827a14b0a7dcaaf02c10d9608e831168e9b0a5686b4f893895f3e19fd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs" podUID="dbbd192e-9df8-40ec-9397-f9eebf6b9111" Feb 16 21:05:21 crc kubenswrapper[4811]: E0216 21:05:21.982694 4811 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-849f874f67-kc8n4_openshift-operators_42b00ab7-c05d-40bc-a605-10d2bc710ec5_0(f897c229b3b350a48c4bd472a1022f6490782c34a5e1f270dae3391ac8e64891): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:05:21 crc kubenswrapper[4811]: E0216 21:05:21.982748 4811 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-849f874f67-kc8n4_openshift-operators_42b00ab7-c05d-40bc-a605-10d2bc710ec5_0(f897c229b3b350a48c4bd472a1022f6490782c34a5e1f270dae3391ac8e64891): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4" Feb 16 21:05:21 crc kubenswrapper[4811]: E0216 21:05:21.982785 4811 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-849f874f67-kc8n4_openshift-operators_42b00ab7-c05d-40bc-a605-10d2bc710ec5_0(f897c229b3b350a48c4bd472a1022f6490782c34a5e1f270dae3391ac8e64891): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4" Feb 16 21:05:21 crc kubenswrapper[4811]: E0216 21:05:21.982831 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-849f874f67-kc8n4_openshift-operators(42b00ab7-c05d-40bc-a605-10d2bc710ec5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-849f874f67-kc8n4_openshift-operators(42b00ab7-c05d-40bc-a605-10d2bc710ec5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-849f874f67-kc8n4_openshift-operators_42b00ab7-c05d-40bc-a605-10d2bc710ec5_0(f897c229b3b350a48c4bd472a1022f6490782c34a5e1f270dae3391ac8e64891): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4" podUID="42b00ab7-c05d-40bc-a605-10d2bc710ec5" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.985829 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf67r\" (UniqueName: \"kubernetes.io/projected/fa4598ee-dd6b-40e2-a925-71d9e3e6c17a-kube-api-access-cf67r\") pod \"perses-operator-5bf474d74f-8r4zv\" (UID: \"fa4598ee-dd6b-40e2-a925-71d9e3e6c17a\") " pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.985876 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw2zg\" (UniqueName: \"kubernetes.io/projected/bba265f5-85c6-4130-a470-839286f95d5b-kube-api-access-dw2zg\") pod \"observability-operator-59bdc8b94-4z4hh\" (UID: \"bba265f5-85c6-4130-a470-839286f95d5b\") " pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.985929 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/fa4598ee-dd6b-40e2-a925-71d9e3e6c17a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-8r4zv\" (UID: \"fa4598ee-dd6b-40e2-a925-71d9e3e6c17a\") " pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.985957 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/bba265f5-85c6-4130-a470-839286f95d5b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-4z4hh\" (UID: \"bba265f5-85c6-4130-a470-839286f95d5b\") " pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" Feb 16 21:05:21 crc kubenswrapper[4811]: I0216 21:05:21.989741 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/bba265f5-85c6-4130-a470-839286f95d5b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-4z4hh\" (UID: \"bba265f5-85c6-4130-a470-839286f95d5b\") " pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.003002 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw2zg\" (UniqueName: \"kubernetes.io/projected/bba265f5-85c6-4130-a470-839286f95d5b-kube-api-access-dw2zg\") pod \"observability-operator-59bdc8b94-4z4hh\" (UID: \"bba265f5-85c6-4130-a470-839286f95d5b\") " pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.076730 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.087470 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/fa4598ee-dd6b-40e2-a925-71d9e3e6c17a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-8r4zv\" (UID: \"fa4598ee-dd6b-40e2-a925-71d9e3e6c17a\") " pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.087545 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf67r\" (UniqueName: \"kubernetes.io/projected/fa4598ee-dd6b-40e2-a925-71d9e3e6c17a-kube-api-access-cf67r\") pod \"perses-operator-5bf474d74f-8r4zv\" (UID: \"fa4598ee-dd6b-40e2-a925-71d9e3e6c17a\") " pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.088460 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/fa4598ee-dd6b-40e2-a925-71d9e3e6c17a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-8r4zv\" (UID: \"fa4598ee-dd6b-40e2-a925-71d9e3e6c17a\") " pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.098059 4811 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4z4hh_openshift-operators_bba265f5-85c6-4130-a470-839286f95d5b_0(3f753462dd6fe6b480dfe88acc938adc0046341192ded1bc1a0a7d2c6740ef81): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.098156 4811 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4z4hh_openshift-operators_bba265f5-85c6-4130-a470-839286f95d5b_0(3f753462dd6fe6b480dfe88acc938adc0046341192ded1bc1a0a7d2c6740ef81): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.098187 4811 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4z4hh_openshift-operators_bba265f5-85c6-4130-a470-839286f95d5b_0(3f753462dd6fe6b480dfe88acc938adc0046341192ded1bc1a0a7d2c6740ef81): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.098312 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-4z4hh_openshift-operators(bba265f5-85c6-4130-a470-839286f95d5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-4z4hh_openshift-operators(bba265f5-85c6-4130-a470-839286f95d5b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4z4hh_openshift-operators_bba265f5-85c6-4130-a470-839286f95d5b_0(3f753462dd6fe6b480dfe88acc938adc0046341192ded1bc1a0a7d2c6740ef81): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" podUID="bba265f5-85c6-4130-a470-839286f95d5b" Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.112847 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf67r\" (UniqueName: \"kubernetes.io/projected/fa4598ee-dd6b-40e2-a925-71d9e3e6c17a-kube-api-access-cf67r\") pod \"perses-operator-5bf474d74f-8r4zv\" (UID: \"fa4598ee-dd6b-40e2-a925-71d9e3e6c17a\") " pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.224648 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.251074 4811 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8r4zv_openshift-operators_fa4598ee-dd6b-40e2-a925-71d9e3e6c17a_0(1d44582544e1d756643bed845b03c4a590b2d62706a0c410aabc3c130616eb60): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.251173 4811 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8r4zv_openshift-operators_fa4598ee-dd6b-40e2-a925-71d9e3e6c17a_0(1d44582544e1d756643bed845b03c4a590b2d62706a0c410aabc3c130616eb60): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.251225 4811 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8r4zv_openshift-operators_fa4598ee-dd6b-40e2-a925-71d9e3e6c17a_0(1d44582544e1d756643bed845b03c4a590b2d62706a0c410aabc3c130616eb60): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.251295 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-8r4zv_openshift-operators(fa4598ee-dd6b-40e2-a925-71d9e3e6c17a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-8r4zv_openshift-operators(fa4598ee-dd6b-40e2-a925-71d9e3e6c17a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8r4zv_openshift-operators_fa4598ee-dd6b-40e2-a925-71d9e3e6c17a_0(1d44582544e1d756643bed845b03c4a590b2d62706a0c410aabc3c130616eb60): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" podUID="fa4598ee-dd6b-40e2-a925-71d9e3e6c17a" Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.534972 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" event={"ID":"e3e06004-9d25-4d9c-b66e-b537cfd21bac","Type":"ContainerStarted","Data":"6e5af5276c64cafc09e49b8e61c3ba7963df0e8c90f062d996c51807e9799e3e"} Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.535307 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.535348 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.591776 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" podStartSLOduration=8.591761512 podStartE2EDuration="8.591761512s" podCreationTimestamp="2026-02-16 21:05:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:05:22.587682937 +0000 UTC m=+540.516978885" watchObservedRunningTime="2026-02-16 21:05:22.591761512 +0000 UTC m=+540.521057440" Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.607816 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.812746 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-8r4zv"] Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.812843 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.813229 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.817438 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs"] Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.817527 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs" Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.817979 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs" Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.820509 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-4z4hh"] Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.820638 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.821165 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.830858 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-88jqj"] Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.830965 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-88jqj" Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.831456 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-88jqj" Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.852993 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4"] Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.853097 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4" Feb 16 21:05:22 crc kubenswrapper[4811]: I0216 21:05:22.853495 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.877408 4811 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8r4zv_openshift-operators_fa4598ee-dd6b-40e2-a925-71d9e3e6c17a_0(6f1ef2b11902ff7e4c4d2f4cdab94518fba75792647c62bc64440095f1512990): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.877472 4811 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8r4zv_openshift-operators_fa4598ee-dd6b-40e2-a925-71d9e3e6c17a_0(6f1ef2b11902ff7e4c4d2f4cdab94518fba75792647c62bc64440095f1512990): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.877498 4811 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8r4zv_openshift-operators_fa4598ee-dd6b-40e2-a925-71d9e3e6c17a_0(6f1ef2b11902ff7e4c4d2f4cdab94518fba75792647c62bc64440095f1512990): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.877562 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-8r4zv_openshift-operators(fa4598ee-dd6b-40e2-a925-71d9e3e6c17a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-8r4zv_openshift-operators(fa4598ee-dd6b-40e2-a925-71d9e3e6c17a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8r4zv_openshift-operators_fa4598ee-dd6b-40e2-a925-71d9e3e6c17a_0(6f1ef2b11902ff7e4c4d2f4cdab94518fba75792647c62bc64440095f1512990): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" podUID="fa4598ee-dd6b-40e2-a925-71d9e3e6c17a" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.888208 4811 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4z4hh_openshift-operators_bba265f5-85c6-4130-a470-839286f95d5b_0(be78a58f4ba612c02cb86759e4a743c1f214a6d28f1d7668179009e509cf2e72): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.888272 4811 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4z4hh_openshift-operators_bba265f5-85c6-4130-a470-839286f95d5b_0(be78a58f4ba612c02cb86759e4a743c1f214a6d28f1d7668179009e509cf2e72): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.888311 4811 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4z4hh_openshift-operators_bba265f5-85c6-4130-a470-839286f95d5b_0(be78a58f4ba612c02cb86759e4a743c1f214a6d28f1d7668179009e509cf2e72): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.888361 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-4z4hh_openshift-operators(bba265f5-85c6-4130-a470-839286f95d5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-4z4hh_openshift-operators(bba265f5-85c6-4130-a470-839286f95d5b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4z4hh_openshift-operators_bba265f5-85c6-4130-a470-839286f95d5b_0(be78a58f4ba612c02cb86759e4a743c1f214a6d28f1d7668179009e509cf2e72): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" podUID="bba265f5-85c6-4130-a470-839286f95d5b" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.888422 4811 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-88jqj_openshift-operators_dc3ef150-066b-4fd0-bff2-4606e25694e4_0(0d46084a0bae2cf9ce55a27342ffdd9b1ac7d93d02a64958fbde2022f4bb254b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.888440 4811 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-88jqj_openshift-operators_dc3ef150-066b-4fd0-bff2-4606e25694e4_0(0d46084a0bae2cf9ce55a27342ffdd9b1ac7d93d02a64958fbde2022f4bb254b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-88jqj" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.888452 4811 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-88jqj_openshift-operators_dc3ef150-066b-4fd0-bff2-4606e25694e4_0(0d46084a0bae2cf9ce55a27342ffdd9b1ac7d93d02a64958fbde2022f4bb254b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-88jqj" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.888474 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-88jqj_openshift-operators(dc3ef150-066b-4fd0-bff2-4606e25694e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-88jqj_openshift-operators(dc3ef150-066b-4fd0-bff2-4606e25694e4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-88jqj_openshift-operators_dc3ef150-066b-4fd0-bff2-4606e25694e4_0(0d46084a0bae2cf9ce55a27342ffdd9b1ac7d93d02a64958fbde2022f4bb254b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-88jqj" podUID="dc3ef150-066b-4fd0-bff2-4606e25694e4" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.893918 4811 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-849f874f67-zr8hs_openshift-operators_dbbd192e-9df8-40ec-9397-f9eebf6b9111_0(7f0398e4ba200a38b17957d5ce4d581cca1b3f6b152343e7adae0421723fb4da): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.893975 4811 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-849f874f67-zr8hs_openshift-operators_dbbd192e-9df8-40ec-9397-f9eebf6b9111_0(7f0398e4ba200a38b17957d5ce4d581cca1b3f6b152343e7adae0421723fb4da): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.894030 4811 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-849f874f67-zr8hs_openshift-operators_dbbd192e-9df8-40ec-9397-f9eebf6b9111_0(7f0398e4ba200a38b17957d5ce4d581cca1b3f6b152343e7adae0421723fb4da): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.894085 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-849f874f67-zr8hs_openshift-operators(dbbd192e-9df8-40ec-9397-f9eebf6b9111)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-849f874f67-zr8hs_openshift-operators(dbbd192e-9df8-40ec-9397-f9eebf6b9111)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-849f874f67-zr8hs_openshift-operators_dbbd192e-9df8-40ec-9397-f9eebf6b9111_0(7f0398e4ba200a38b17957d5ce4d581cca1b3f6b152343e7adae0421723fb4da): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs" podUID="dbbd192e-9df8-40ec-9397-f9eebf6b9111" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.905831 4811 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-849f874f67-kc8n4_openshift-operators_42b00ab7-c05d-40bc-a605-10d2bc710ec5_0(9ac78211706e0bdde3b6612390bb671fe7339beb0b8039d0a34de1956546d9b1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.905908 4811 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-849f874f67-kc8n4_openshift-operators_42b00ab7-c05d-40bc-a605-10d2bc710ec5_0(9ac78211706e0bdde3b6612390bb671fe7339beb0b8039d0a34de1956546d9b1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.905937 4811 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-849f874f67-kc8n4_openshift-operators_42b00ab7-c05d-40bc-a605-10d2bc710ec5_0(9ac78211706e0bdde3b6612390bb671fe7339beb0b8039d0a34de1956546d9b1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4" Feb 16 21:05:22 crc kubenswrapper[4811]: E0216 21:05:22.905987 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-849f874f67-kc8n4_openshift-operators(42b00ab7-c05d-40bc-a605-10d2bc710ec5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-849f874f67-kc8n4_openshift-operators(42b00ab7-c05d-40bc-a605-10d2bc710ec5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-849f874f67-kc8n4_openshift-operators_42b00ab7-c05d-40bc-a605-10d2bc710ec5_0(9ac78211706e0bdde3b6612390bb671fe7339beb0b8039d0a34de1956546d9b1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4" podUID="42b00ab7-c05d-40bc-a605-10d2bc710ec5" Feb 16 21:05:23 crc kubenswrapper[4811]: I0216 21:05:23.539080 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:23 crc kubenswrapper[4811]: I0216 21:05:23.578324 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:28 crc kubenswrapper[4811]: I0216 21:05:28.703230 4811 scope.go:117] "RemoveContainer" containerID="bf50f864995f5e7737f081953d628014fddf69c787e71973d21b61c272b0a372" Feb 16 21:05:28 crc kubenswrapper[4811]: E0216 21:05:28.703851 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-mgctp_openshift-multus(a946fefd-e014-48b1-995b-ef221a88bc73)\"" pod="openshift-multus/multus-mgctp" podUID="a946fefd-e014-48b1-995b-ef221a88bc73" Feb 
16 21:05:33 crc kubenswrapper[4811]: I0216 21:05:33.702040 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" Feb 16 21:05:33 crc kubenswrapper[4811]: I0216 21:05:33.702577 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" Feb 16 21:05:33 crc kubenswrapper[4811]: E0216 21:05:33.733775 4811 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4z4hh_openshift-operators_bba265f5-85c6-4130-a470-839286f95d5b_0(55deefa4c7abaeaa2f5d7ee50cf6f8c92d3cbd8db45bc163dddac7c3bbc1e487): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:05:33 crc kubenswrapper[4811]: E0216 21:05:33.733856 4811 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4z4hh_openshift-operators_bba265f5-85c6-4130-a470-839286f95d5b_0(55deefa4c7abaeaa2f5d7ee50cf6f8c92d3cbd8db45bc163dddac7c3bbc1e487): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" Feb 16 21:05:33 crc kubenswrapper[4811]: E0216 21:05:33.733886 4811 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4z4hh_openshift-operators_bba265f5-85c6-4130-a470-839286f95d5b_0(55deefa4c7abaeaa2f5d7ee50cf6f8c92d3cbd8db45bc163dddac7c3bbc1e487): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" Feb 16 21:05:33 crc kubenswrapper[4811]: E0216 21:05:33.733940 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-4z4hh_openshift-operators(bba265f5-85c6-4130-a470-839286f95d5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-4z4hh_openshift-operators(bba265f5-85c6-4130-a470-839286f95d5b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4z4hh_openshift-operators_bba265f5-85c6-4130-a470-839286f95d5b_0(55deefa4c7abaeaa2f5d7ee50cf6f8c92d3cbd8db45bc163dddac7c3bbc1e487): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" podUID="bba265f5-85c6-4130-a470-839286f95d5b" Feb 16 21:05:34 crc kubenswrapper[4811]: I0216 21:05:34.702829 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" Feb 16 21:05:34 crc kubenswrapper[4811]: I0216 21:05:34.703442 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" Feb 16 21:05:34 crc kubenswrapper[4811]: E0216 21:05:34.746325 4811 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8r4zv_openshift-operators_fa4598ee-dd6b-40e2-a925-71d9e3e6c17a_0(5a053b1e86a02c36143d093a47d30bc77ea7e8c03d047152033b125a4e439595): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 21:05:34 crc kubenswrapper[4811]: E0216 21:05:34.746376 4811 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8r4zv_openshift-operators_fa4598ee-dd6b-40e2-a925-71d9e3e6c17a_0(5a053b1e86a02c36143d093a47d30bc77ea7e8c03d047152033b125a4e439595): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" Feb 16 21:05:34 crc kubenswrapper[4811]: E0216 21:05:34.746398 4811 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8r4zv_openshift-operators_fa4598ee-dd6b-40e2-a925-71d9e3e6c17a_0(5a053b1e86a02c36143d093a47d30bc77ea7e8c03d047152033b125a4e439595): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" Feb 16 21:05:34 crc kubenswrapper[4811]: E0216 21:05:34.746598 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-8r4zv_openshift-operators(fa4598ee-dd6b-40e2-a925-71d9e3e6c17a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-8r4zv_openshift-operators(fa4598ee-dd6b-40e2-a925-71d9e3e6c17a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8r4zv_openshift-operators_fa4598ee-dd6b-40e2-a925-71d9e3e6c17a_0(5a053b1e86a02c36143d093a47d30bc77ea7e8c03d047152033b125a4e439595): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" podUID="fa4598ee-dd6b-40e2-a925-71d9e3e6c17a" Feb 16 21:05:35 crc kubenswrapper[4811]: I0216 21:05:35.701919 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4" Feb 16 21:05:35 crc kubenswrapper[4811]: I0216 21:05:35.702308 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4" Feb 16 21:05:35 crc kubenswrapper[4811]: E0216 21:05:35.730906 4811 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-849f874f67-kc8n4_openshift-operators_42b00ab7-c05d-40bc-a605-10d2bc710ec5_0(dcf747882dd8e8bc6920822e99bc61738c6bcc05a5605590276d1ac6fe0ff336): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:05:35 crc kubenswrapper[4811]: E0216 21:05:35.730995 4811 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-849f874f67-kc8n4_openshift-operators_42b00ab7-c05d-40bc-a605-10d2bc710ec5_0(dcf747882dd8e8bc6920822e99bc61738c6bcc05a5605590276d1ac6fe0ff336): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4" Feb 16 21:05:35 crc kubenswrapper[4811]: E0216 21:05:35.731034 4811 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-849f874f67-kc8n4_openshift-operators_42b00ab7-c05d-40bc-a605-10d2bc710ec5_0(dcf747882dd8e8bc6920822e99bc61738c6bcc05a5605590276d1ac6fe0ff336): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4" Feb 16 21:05:35 crc kubenswrapper[4811]: E0216 21:05:35.731111 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-849f874f67-kc8n4_openshift-operators(42b00ab7-c05d-40bc-a605-10d2bc710ec5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-849f874f67-kc8n4_openshift-operators(42b00ab7-c05d-40bc-a605-10d2bc710ec5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-849f874f67-kc8n4_openshift-operators_42b00ab7-c05d-40bc-a605-10d2bc710ec5_0(dcf747882dd8e8bc6920822e99bc61738c6bcc05a5605590276d1ac6fe0ff336): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4" podUID="42b00ab7-c05d-40bc-a605-10d2bc710ec5" Feb 16 21:05:36 crc kubenswrapper[4811]: I0216 21:05:36.702176 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs" Feb 16 21:05:36 crc kubenswrapper[4811]: I0216 21:05:36.702188 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-88jqj" Feb 16 21:05:36 crc kubenswrapper[4811]: I0216 21:05:36.702814 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs" Feb 16 21:05:36 crc kubenswrapper[4811]: I0216 21:05:36.702881 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-88jqj" Feb 16 21:05:36 crc kubenswrapper[4811]: E0216 21:05:36.743934 4811 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-88jqj_openshift-operators_dc3ef150-066b-4fd0-bff2-4606e25694e4_0(1e96bf860b6907e43793d51ea429b3c23ddd3e186a307c0ca26c2420bcd6462e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:05:36 crc kubenswrapper[4811]: E0216 21:05:36.744016 4811 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-88jqj_openshift-operators_dc3ef150-066b-4fd0-bff2-4606e25694e4_0(1e96bf860b6907e43793d51ea429b3c23ddd3e186a307c0ca26c2420bcd6462e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-88jqj" Feb 16 21:05:36 crc kubenswrapper[4811]: E0216 21:05:36.744047 4811 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-88jqj_openshift-operators_dc3ef150-066b-4fd0-bff2-4606e25694e4_0(1e96bf860b6907e43793d51ea429b3c23ddd3e186a307c0ca26c2420bcd6462e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-88jqj" Feb 16 21:05:36 crc kubenswrapper[4811]: E0216 21:05:36.744103 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-88jqj_openshift-operators(dc3ef150-066b-4fd0-bff2-4606e25694e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-88jqj_openshift-operators(dc3ef150-066b-4fd0-bff2-4606e25694e4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-88jqj_openshift-operators_dc3ef150-066b-4fd0-bff2-4606e25694e4_0(1e96bf860b6907e43793d51ea429b3c23ddd3e186a307c0ca26c2420bcd6462e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-88jqj" podUID="dc3ef150-066b-4fd0-bff2-4606e25694e4" Feb 16 21:05:36 crc kubenswrapper[4811]: E0216 21:05:36.749526 4811 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-849f874f67-zr8hs_openshift-operators_dbbd192e-9df8-40ec-9397-f9eebf6b9111_0(2d8bd95b7236e5752451c36a5dd7e7a144e300fad7bbf4cf6bd73c4c9f1c512e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:05:36 crc kubenswrapper[4811]: E0216 21:05:36.749606 4811 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-849f874f67-zr8hs_openshift-operators_dbbd192e-9df8-40ec-9397-f9eebf6b9111_0(2d8bd95b7236e5752451c36a5dd7e7a144e300fad7bbf4cf6bd73c4c9f1c512e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs" Feb 16 21:05:36 crc kubenswrapper[4811]: E0216 21:05:36.749659 4811 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-849f874f67-zr8hs_openshift-operators_dbbd192e-9df8-40ec-9397-f9eebf6b9111_0(2d8bd95b7236e5752451c36a5dd7e7a144e300fad7bbf4cf6bd73c4c9f1c512e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs" Feb 16 21:05:36 crc kubenswrapper[4811]: E0216 21:05:36.749710 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-849f874f67-zr8hs_openshift-operators(dbbd192e-9df8-40ec-9397-f9eebf6b9111)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-849f874f67-zr8hs_openshift-operators(dbbd192e-9df8-40ec-9397-f9eebf6b9111)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-849f874f67-zr8hs_openshift-operators_dbbd192e-9df8-40ec-9397-f9eebf6b9111_0(2d8bd95b7236e5752451c36a5dd7e7a144e300fad7bbf4cf6bd73c4c9f1c512e): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs" podUID="dbbd192e-9df8-40ec-9397-f9eebf6b9111" Feb 16 21:05:42 crc kubenswrapper[4811]: I0216 21:05:42.708169 4811 scope.go:117] "RemoveContainer" containerID="bf50f864995f5e7737f081953d628014fddf69c787e71973d21b61c272b0a372" Feb 16 21:05:43 crc kubenswrapper[4811]: I0216 21:05:43.653097 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mgctp_a946fefd-e014-48b1-995b-ef221a88bc73/kube-multus/2.log" Feb 16 21:05:43 crc kubenswrapper[4811]: I0216 21:05:43.653783 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mgctp" event={"ID":"a946fefd-e014-48b1-995b-ef221a88bc73","Type":"ContainerStarted","Data":"caa7d1394b5e90272417d2f3fd14f9e3156f7cd72da67e8d19c10b83a26c5898"} Feb 16 21:05:45 crc kubenswrapper[4811]: I0216 21:05:45.349534 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lhtp5" Feb 16 21:05:45 crc kubenswrapper[4811]: I0216 21:05:45.702877 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" Feb 16 21:05:45 crc kubenswrapper[4811]: I0216 21:05:45.703546 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" Feb 16 21:05:45 crc kubenswrapper[4811]: I0216 21:05:45.895751 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-4z4hh"] Feb 16 21:05:46 crc kubenswrapper[4811]: I0216 21:05:46.671443 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" event={"ID":"bba265f5-85c6-4130-a470-839286f95d5b","Type":"ContainerStarted","Data":"7b3717d9868ff0835c32b1feb03c6d1c1041ea4ed5e24d1cb588eeeceb4992af"} Feb 16 21:05:47 crc kubenswrapper[4811]: I0216 21:05:47.702106 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4" Feb 16 21:05:47 crc kubenswrapper[4811]: I0216 21:05:47.703317 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4" Feb 16 21:05:48 crc kubenswrapper[4811]: I0216 21:05:48.008962 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4"] Feb 16 21:05:48 crc kubenswrapper[4811]: I0216 21:05:48.363951 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:05:48 crc kubenswrapper[4811]: I0216 21:05:48.364031 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 16 21:05:48 crc kubenswrapper[4811]: I0216 21:05:48.364094 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" Feb 16 21:05:48 crc kubenswrapper[4811]: I0216 21:05:48.364866 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1f0a256388bab5ae3a75d81440eaebf36f0fd6fc190dadf86a4b8d117b1e9e11"} pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:05:48 crc kubenswrapper[4811]: I0216 21:05:48.364964 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" containerID="cri-o://1f0a256388bab5ae3a75d81440eaebf36f0fd6fc190dadf86a4b8d117b1e9e11" gracePeriod=600 Feb 16 21:05:48 crc kubenswrapper[4811]: I0216 21:05:48.684178 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4" event={"ID":"42b00ab7-c05d-40bc-a605-10d2bc710ec5","Type":"ContainerStarted","Data":"fe27e015d31f411d51591506d609ebc8a27bc72fc6c6f30411ab0d1fa80774d5"} Feb 16 21:05:48 crc kubenswrapper[4811]: I0216 21:05:48.702777 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-88jqj" Feb 16 21:05:48 crc kubenswrapper[4811]: I0216 21:05:48.702778 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" Feb 16 21:05:48 crc kubenswrapper[4811]: I0216 21:05:48.703623 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" Feb 16 21:05:48 crc kubenswrapper[4811]: I0216 21:05:48.703623 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-88jqj" Feb 16 21:05:49 crc kubenswrapper[4811]: I0216 21:05:49.005971 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-88jqj"] Feb 16 21:05:49 crc kubenswrapper[4811]: I0216 21:05:49.270396 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-8r4zv"] Feb 16 21:05:49 crc kubenswrapper[4811]: W0216 21:05:49.275123 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa4598ee_dd6b_40e2_a925_71d9e3e6c17a.slice/crio-4b8d5478949b9020df26bd2c0f3046414658fc31fb558aeb450fa6e4c4663a14 WatchSource:0}: Error finding container 4b8d5478949b9020df26bd2c0f3046414658fc31fb558aeb450fa6e4c4663a14: Status 404 returned error can't find the container with id 4b8d5478949b9020df26bd2c0f3046414658fc31fb558aeb450fa6e4c4663a14 Feb 16 21:05:49 crc kubenswrapper[4811]: I0216 21:05:49.694544 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-88jqj" event={"ID":"dc3ef150-066b-4fd0-bff2-4606e25694e4","Type":"ContainerStarted","Data":"3eb5cf7d80b74671b54d7d619ba5a1a984ea2506ef37a9ff6ea1862ea4d3cc72"} Feb 16 21:05:49 crc kubenswrapper[4811]: I0216 21:05:49.695976 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" event={"ID":"fa4598ee-dd6b-40e2-a925-71d9e3e6c17a","Type":"ContainerStarted","Data":"4b8d5478949b9020df26bd2c0f3046414658fc31fb558aeb450fa6e4c4663a14"} Feb 16 21:05:49 crc kubenswrapper[4811]: I0216 21:05:49.700068 4811 generic.go:334] "Generic (PLEG): container finished" 
podID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerID="1f0a256388bab5ae3a75d81440eaebf36f0fd6fc190dadf86a4b8d117b1e9e11" exitCode=0 Feb 16 21:05:49 crc kubenswrapper[4811]: I0216 21:05:49.700109 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerDied","Data":"1f0a256388bab5ae3a75d81440eaebf36f0fd6fc190dadf86a4b8d117b1e9e11"} Feb 16 21:05:49 crc kubenswrapper[4811]: I0216 21:05:49.700160 4811 scope.go:117] "RemoveContainer" containerID="511f95f6a6799c704fdd7e32c1371b422a6e981f14147fd4c29d440cdf6c2331" Feb 16 21:05:50 crc kubenswrapper[4811]: I0216 21:05:50.710058 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerStarted","Data":"15b3c1409544ddca121710199668aff9f31624230e68744253cb5ac3f7bbbf00"} Feb 16 21:05:51 crc kubenswrapper[4811]: I0216 21:05:51.702735 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs" Feb 16 21:05:51 crc kubenswrapper[4811]: I0216 21:05:51.703308 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs" Feb 16 21:05:57 crc kubenswrapper[4811]: I0216 21:05:57.016350 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs"] Feb 16 21:05:57 crc kubenswrapper[4811]: W0216 21:05:57.064950 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbbd192e_9df8_40ec_9397_f9eebf6b9111.slice/crio-c2279c7449bc5e328f00054dcd00722adcb63455d5adaec955748bbb59668a66 WatchSource:0}: Error finding container c2279c7449bc5e328f00054dcd00722adcb63455d5adaec955748bbb59668a66: Status 404 returned error can't find the container with id c2279c7449bc5e328f00054dcd00722adcb63455d5adaec955748bbb59668a66 Feb 16 21:05:57 crc kubenswrapper[4811]: I0216 21:05:57.777961 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" event={"ID":"bba265f5-85c6-4130-a470-839286f95d5b","Type":"ContainerStarted","Data":"8e1ea9ce6d8a47c4e554d50b312aebfc7338f11066aecfbb511778899afdd411"} Feb 16 21:05:57 crc kubenswrapper[4811]: I0216 21:05:57.778518 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" Feb 16 21:05:57 crc kubenswrapper[4811]: I0216 21:05:57.780990 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-88jqj" event={"ID":"dc3ef150-066b-4fd0-bff2-4606e25694e4","Type":"ContainerStarted","Data":"8b0014c17666f023534b8f605ea174aeae36f26997adad8990ba64c9146c8ff3"} Feb 16 21:05:57 crc kubenswrapper[4811]: I0216 21:05:57.783762 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs" 
event={"ID":"dbbd192e-9df8-40ec-9397-f9eebf6b9111","Type":"ContainerStarted","Data":"5598b39aec97b8b88cde8919d8d5178d5289fe9e79fbd3172c3a796221915850"} Feb 16 21:05:57 crc kubenswrapper[4811]: I0216 21:05:57.783796 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs" event={"ID":"dbbd192e-9df8-40ec-9397-f9eebf6b9111","Type":"ContainerStarted","Data":"c2279c7449bc5e328f00054dcd00722adcb63455d5adaec955748bbb59668a66"} Feb 16 21:05:57 crc kubenswrapper[4811]: I0216 21:05:57.787728 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4" event={"ID":"42b00ab7-c05d-40bc-a605-10d2bc710ec5","Type":"ContainerStarted","Data":"dec1b449e9731e3b7ad3ef971f5a2c45d0d83b54851d5880c51ce93ab47c4430"} Feb 16 21:05:57 crc kubenswrapper[4811]: I0216 21:05:57.790025 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" event={"ID":"fa4598ee-dd6b-40e2-a925-71d9e3e6c17a","Type":"ContainerStarted","Data":"997a305ea56c1190ee36694417e45fec1ff0b92bc0e1d1e8c8ec376b4d8ccc2b"} Feb 16 21:05:57 crc kubenswrapper[4811]: I0216 21:05:57.790330 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" Feb 16 21:05:57 crc kubenswrapper[4811]: I0216 21:05:57.820518 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" podStartSLOduration=26.105323154 podStartE2EDuration="36.820487925s" podCreationTimestamp="2026-02-16 21:05:21 +0000 UTC" firstStartedPulling="2026-02-16 21:05:45.917841872 +0000 UTC m=+563.847137810" lastFinishedPulling="2026-02-16 21:05:56.633006643 +0000 UTC m=+574.562302581" observedRunningTime="2026-02-16 21:05:57.814516756 +0000 UTC m=+575.743812724" watchObservedRunningTime="2026-02-16 21:05:57.820487925 
+0000 UTC m=+575.749783883" Feb 16 21:05:57 crc kubenswrapper[4811]: I0216 21:05:57.843054 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" podStartSLOduration=29.470206425 podStartE2EDuration="36.84303822s" podCreationTimestamp="2026-02-16 21:05:21 +0000 UTC" firstStartedPulling="2026-02-16 21:05:49.278556936 +0000 UTC m=+567.207852864" lastFinishedPulling="2026-02-16 21:05:56.651388731 +0000 UTC m=+574.580684659" observedRunningTime="2026-02-16 21:05:57.840803418 +0000 UTC m=+575.770099376" watchObservedRunningTime="2026-02-16 21:05:57.84303822 +0000 UTC m=+575.772334158" Feb 16 21:05:57 crc kubenswrapper[4811]: I0216 21:05:57.859647 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-4z4hh" Feb 16 21:05:57 crc kubenswrapper[4811]: I0216 21:05:57.905964 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-88jqj" podStartSLOduration=29.275830512 podStartE2EDuration="36.905930464s" podCreationTimestamp="2026-02-16 21:05:21 +0000 UTC" firstStartedPulling="2026-02-16 21:05:49.021834042 +0000 UTC m=+566.951129980" lastFinishedPulling="2026-02-16 21:05:56.651933994 +0000 UTC m=+574.581229932" observedRunningTime="2026-02-16 21:05:57.862178166 +0000 UTC m=+575.791474144" watchObservedRunningTime="2026-02-16 21:05:57.905930464 +0000 UTC m=+575.835226442" Feb 16 21:05:57 crc kubenswrapper[4811]: I0216 21:05:57.908044 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-kc8n4" podStartSLOduration=28.296635356 podStartE2EDuration="36.908030833s" podCreationTimestamp="2026-02-16 21:05:21 +0000 UTC" firstStartedPulling="2026-02-16 21:05:48.021591706 +0000 UTC m=+565.950887644" lastFinishedPulling="2026-02-16 21:05:56.632987183 +0000 UTC 
m=+574.562283121" observedRunningTime="2026-02-16 21:05:57.901506721 +0000 UTC m=+575.830802699" watchObservedRunningTime="2026-02-16 21:05:57.908030833 +0000 UTC m=+575.837326811" Feb 16 21:05:57 crc kubenswrapper[4811]: I0216 21:05:57.947027 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-849f874f67-zr8hs" podStartSLOduration=36.946995049 podStartE2EDuration="36.946995049s" podCreationTimestamp="2026-02-16 21:05:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:05:57.945015143 +0000 UTC m=+575.874311121" watchObservedRunningTime="2026-02-16 21:05:57.946995049 +0000 UTC m=+575.876290987" Feb 16 21:06:02 crc kubenswrapper[4811]: I0216 21:06:02.231169 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-8r4zv" Feb 16 21:06:07 crc kubenswrapper[4811]: I0216 21:06:07.752808 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-cdx5v"] Feb 16 21:06:07 crc kubenswrapper[4811]: I0216 21:06:07.758592 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cdx5v" Feb 16 21:06:07 crc kubenswrapper[4811]: I0216 21:06:07.774536 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-vb859"] Feb 16 21:06:07 crc kubenswrapper[4811]: I0216 21:06:07.775534 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-vb859" Feb 16 21:06:07 crc kubenswrapper[4811]: I0216 21:06:07.778164 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 16 21:06:07 crc kubenswrapper[4811]: I0216 21:06:07.778428 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 16 21:06:07 crc kubenswrapper[4811]: I0216 21:06:07.778579 4811 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-tfnh4" Feb 16 21:06:07 crc kubenswrapper[4811]: I0216 21:06:07.778699 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-cdx5v"] Feb 16 21:06:07 crc kubenswrapper[4811]: I0216 21:06:07.786533 4811 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-j24hc" Feb 16 21:06:07 crc kubenswrapper[4811]: I0216 21:06:07.790724 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-vb859"] Feb 16 21:06:07 crc kubenswrapper[4811]: I0216 21:06:07.800946 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-gvrsm"] Feb 16 21:06:07 crc kubenswrapper[4811]: I0216 21:06:07.801829 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-gvrsm" Feb 16 21:06:07 crc kubenswrapper[4811]: I0216 21:06:07.810466 4811 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-4m6x9" Feb 16 21:06:07 crc kubenswrapper[4811]: I0216 21:06:07.834711 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-gvrsm"] Feb 16 21:06:07 crc kubenswrapper[4811]: I0216 21:06:07.939312 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-682mj\" (UniqueName: \"kubernetes.io/projected/eae3ad4d-9c2f-42b0-aba5-349aee77959c-kube-api-access-682mj\") pod \"cert-manager-858654f9db-vb859\" (UID: \"eae3ad4d-9c2f-42b0-aba5-349aee77959c\") " pod="cert-manager/cert-manager-858654f9db-vb859" Feb 16 21:06:07 crc kubenswrapper[4811]: I0216 21:06:07.939412 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9442\" (UniqueName: \"kubernetes.io/projected/5f8737f9-432a-4461-b9ae-990b294ad123-kube-api-access-j9442\") pod \"cert-manager-webhook-687f57d79b-gvrsm\" (UID: \"5f8737f9-432a-4461-b9ae-990b294ad123\") " pod="cert-manager/cert-manager-webhook-687f57d79b-gvrsm" Feb 16 21:06:07 crc kubenswrapper[4811]: I0216 21:06:07.939483 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v29fp\" (UniqueName: \"kubernetes.io/projected/3a9cec30-249b-4b05-a7d3-1722bf778309-kube-api-access-v29fp\") pod \"cert-manager-cainjector-cf98fcc89-cdx5v\" (UID: \"3a9cec30-249b-4b05-a7d3-1722bf778309\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-cdx5v" Feb 16 21:06:08 crc kubenswrapper[4811]: I0216 21:06:08.040795 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v29fp\" (UniqueName: 
\"kubernetes.io/projected/3a9cec30-249b-4b05-a7d3-1722bf778309-kube-api-access-v29fp\") pod \"cert-manager-cainjector-cf98fcc89-cdx5v\" (UID: \"3a9cec30-249b-4b05-a7d3-1722bf778309\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-cdx5v" Feb 16 21:06:08 crc kubenswrapper[4811]: I0216 21:06:08.040859 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-682mj\" (UniqueName: \"kubernetes.io/projected/eae3ad4d-9c2f-42b0-aba5-349aee77959c-kube-api-access-682mj\") pod \"cert-manager-858654f9db-vb859\" (UID: \"eae3ad4d-9c2f-42b0-aba5-349aee77959c\") " pod="cert-manager/cert-manager-858654f9db-vb859" Feb 16 21:06:08 crc kubenswrapper[4811]: I0216 21:06:08.040887 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9442\" (UniqueName: \"kubernetes.io/projected/5f8737f9-432a-4461-b9ae-990b294ad123-kube-api-access-j9442\") pod \"cert-manager-webhook-687f57d79b-gvrsm\" (UID: \"5f8737f9-432a-4461-b9ae-990b294ad123\") " pod="cert-manager/cert-manager-webhook-687f57d79b-gvrsm" Feb 16 21:06:08 crc kubenswrapper[4811]: I0216 21:06:08.061804 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v29fp\" (UniqueName: \"kubernetes.io/projected/3a9cec30-249b-4b05-a7d3-1722bf778309-kube-api-access-v29fp\") pod \"cert-manager-cainjector-cf98fcc89-cdx5v\" (UID: \"3a9cec30-249b-4b05-a7d3-1722bf778309\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-cdx5v" Feb 16 21:06:08 crc kubenswrapper[4811]: I0216 21:06:08.061808 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-682mj\" (UniqueName: \"kubernetes.io/projected/eae3ad4d-9c2f-42b0-aba5-349aee77959c-kube-api-access-682mj\") pod \"cert-manager-858654f9db-vb859\" (UID: \"eae3ad4d-9c2f-42b0-aba5-349aee77959c\") " pod="cert-manager/cert-manager-858654f9db-vb859" Feb 16 21:06:08 crc kubenswrapper[4811]: I0216 21:06:08.061968 4811 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-j9442\" (UniqueName: \"kubernetes.io/projected/5f8737f9-432a-4461-b9ae-990b294ad123-kube-api-access-j9442\") pod \"cert-manager-webhook-687f57d79b-gvrsm\" (UID: \"5f8737f9-432a-4461-b9ae-990b294ad123\") " pod="cert-manager/cert-manager-webhook-687f57d79b-gvrsm" Feb 16 21:06:08 crc kubenswrapper[4811]: I0216 21:06:08.099528 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cdx5v" Feb 16 21:06:08 crc kubenswrapper[4811]: I0216 21:06:08.118058 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-vb859" Feb 16 21:06:08 crc kubenswrapper[4811]: I0216 21:06:08.136600 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-gvrsm" Feb 16 21:06:08 crc kubenswrapper[4811]: I0216 21:06:08.397907 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-gvrsm"] Feb 16 21:06:08 crc kubenswrapper[4811]: I0216 21:06:08.542969 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-cdx5v"] Feb 16 21:06:08 crc kubenswrapper[4811]: W0216 21:06:08.549890 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a9cec30_249b_4b05_a7d3_1722bf778309.slice/crio-b041a9916aeb32adde3ef3ecefb401201e79ea501e81541e6ae0216a33e6f79e WatchSource:0}: Error finding container b041a9916aeb32adde3ef3ecefb401201e79ea501e81541e6ae0216a33e6f79e: Status 404 returned error can't find the container with id b041a9916aeb32adde3ef3ecefb401201e79ea501e81541e6ae0216a33e6f79e Feb 16 21:06:08 crc kubenswrapper[4811]: I0216 21:06:08.559365 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-vb859"] Feb 16 21:06:08 crc 
kubenswrapper[4811]: W0216 21:06:08.567465 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeae3ad4d_9c2f_42b0_aba5_349aee77959c.slice/crio-8b84d9cf3cc181baccd10141fcb4da83021e46a68b96ba1611005c30e710efee WatchSource:0}: Error finding container 8b84d9cf3cc181baccd10141fcb4da83021e46a68b96ba1611005c30e710efee: Status 404 returned error can't find the container with id 8b84d9cf3cc181baccd10141fcb4da83021e46a68b96ba1611005c30e710efee Feb 16 21:06:08 crc kubenswrapper[4811]: I0216 21:06:08.865551 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cdx5v" event={"ID":"3a9cec30-249b-4b05-a7d3-1722bf778309","Type":"ContainerStarted","Data":"b041a9916aeb32adde3ef3ecefb401201e79ea501e81541e6ae0216a33e6f79e"} Feb 16 21:06:08 crc kubenswrapper[4811]: I0216 21:06:08.869742 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-gvrsm" event={"ID":"5f8737f9-432a-4461-b9ae-990b294ad123","Type":"ContainerStarted","Data":"d123f11fa34bf0e7e1c605900774190b34f68cc96aca04ed749cc6b0ca743ef6"} Feb 16 21:06:08 crc kubenswrapper[4811]: I0216 21:06:08.870965 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-vb859" event={"ID":"eae3ad4d-9c2f-42b0-aba5-349aee77959c","Type":"ContainerStarted","Data":"8b84d9cf3cc181baccd10141fcb4da83021e46a68b96ba1611005c30e710efee"} Feb 16 21:06:12 crc kubenswrapper[4811]: I0216 21:06:12.902323 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cdx5v" event={"ID":"3a9cec30-249b-4b05-a7d3-1722bf778309","Type":"ContainerStarted","Data":"d3a01b59ebb6b8b40d499a66bbc004e0103032cce48e4e6d68d59452cf74e15a"} Feb 16 21:06:12 crc kubenswrapper[4811]: I0216 21:06:12.904674 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-gvrsm" 
event={"ID":"5f8737f9-432a-4461-b9ae-990b294ad123","Type":"ContainerStarted","Data":"25703a5feca794a4eeaff072eaa64adbc5dbd8b00c8446607a8295f4ce12f442"} Feb 16 21:06:12 crc kubenswrapper[4811]: I0216 21:06:12.905272 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-gvrsm" Feb 16 21:06:12 crc kubenswrapper[4811]: I0216 21:06:12.924154 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cdx5v" podStartSLOduration=2.774515274 podStartE2EDuration="5.924135255s" podCreationTimestamp="2026-02-16 21:06:07 +0000 UTC" firstStartedPulling="2026-02-16 21:06:08.552131959 +0000 UTC m=+586.481427887" lastFinishedPulling="2026-02-16 21:06:11.7017519 +0000 UTC m=+589.631047868" observedRunningTime="2026-02-16 21:06:12.91971662 +0000 UTC m=+590.849012558" watchObservedRunningTime="2026-02-16 21:06:12.924135255 +0000 UTC m=+590.853431193" Feb 16 21:06:12 crc kubenswrapper[4811]: I0216 21:06:12.963811 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-gvrsm" podStartSLOduration=2.582256191 podStartE2EDuration="5.963792223s" podCreationTimestamp="2026-02-16 21:06:07 +0000 UTC" firstStartedPulling="2026-02-16 21:06:08.40316498 +0000 UTC m=+586.332460918" lastFinishedPulling="2026-02-16 21:06:11.784701012 +0000 UTC m=+589.713996950" observedRunningTime="2026-02-16 21:06:12.958850145 +0000 UTC m=+590.888146083" watchObservedRunningTime="2026-02-16 21:06:12.963792223 +0000 UTC m=+590.893088161" Feb 16 21:06:13 crc kubenswrapper[4811]: I0216 21:06:13.912111 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-vb859" event={"ID":"eae3ad4d-9c2f-42b0-aba5-349aee77959c","Type":"ContainerStarted","Data":"1ca1b76ab5687f728400152ecf38d7a156abf005bf105706e45e1f8e425ded2d"} Feb 16 21:06:13 crc kubenswrapper[4811]: I0216 21:06:13.933157 4811 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-vb859" podStartSLOduration=2.220123069 podStartE2EDuration="6.933130383s" podCreationTimestamp="2026-02-16 21:06:07 +0000 UTC" firstStartedPulling="2026-02-16 21:06:08.570403565 +0000 UTC m=+586.499699503" lastFinishedPulling="2026-02-16 21:06:13.283410879 +0000 UTC m=+591.212706817" observedRunningTime="2026-02-16 21:06:13.932535229 +0000 UTC m=+591.861831227" watchObservedRunningTime="2026-02-16 21:06:13.933130383 +0000 UTC m=+591.862426331" Feb 16 21:06:18 crc kubenswrapper[4811]: I0216 21:06:18.144616 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-gvrsm" Feb 16 21:06:45 crc kubenswrapper[4811]: I0216 21:06:45.008961 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5"] Feb 16 21:06:45 crc kubenswrapper[4811]: I0216 21:06:45.010435 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5" Feb 16 21:06:45 crc kubenswrapper[4811]: I0216 21:06:45.015693 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 21:06:45 crc kubenswrapper[4811]: I0216 21:06:45.022734 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5"] Feb 16 21:06:45 crc kubenswrapper[4811]: I0216 21:06:45.073888 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d2a06619-ff67-4a17-b2fa-b3e9f6f45345-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5\" (UID: \"d2a06619-ff67-4a17-b2fa-b3e9f6f45345\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5" Feb 16 21:06:45 crc kubenswrapper[4811]: I0216 21:06:45.073957 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d2a06619-ff67-4a17-b2fa-b3e9f6f45345-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5\" (UID: \"d2a06619-ff67-4a17-b2fa-b3e9f6f45345\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5" Feb 16 21:06:45 crc kubenswrapper[4811]: I0216 21:06:45.074036 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm75f\" (UniqueName: \"kubernetes.io/projected/d2a06619-ff67-4a17-b2fa-b3e9f6f45345-kube-api-access-xm75f\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5\" (UID: \"d2a06619-ff67-4a17-b2fa-b3e9f6f45345\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5" Feb 16 21:06:45 crc kubenswrapper[4811]: 
I0216 21:06:45.175036 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm75f\" (UniqueName: \"kubernetes.io/projected/d2a06619-ff67-4a17-b2fa-b3e9f6f45345-kube-api-access-xm75f\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5\" (UID: \"d2a06619-ff67-4a17-b2fa-b3e9f6f45345\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5" Feb 16 21:06:45 crc kubenswrapper[4811]: I0216 21:06:45.175098 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d2a06619-ff67-4a17-b2fa-b3e9f6f45345-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5\" (UID: \"d2a06619-ff67-4a17-b2fa-b3e9f6f45345\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5" Feb 16 21:06:45 crc kubenswrapper[4811]: I0216 21:06:45.175138 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d2a06619-ff67-4a17-b2fa-b3e9f6f45345-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5\" (UID: \"d2a06619-ff67-4a17-b2fa-b3e9f6f45345\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5" Feb 16 21:06:45 crc kubenswrapper[4811]: I0216 21:06:45.175656 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d2a06619-ff67-4a17-b2fa-b3e9f6f45345-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5\" (UID: \"d2a06619-ff67-4a17-b2fa-b3e9f6f45345\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5" Feb 16 21:06:45 crc kubenswrapper[4811]: I0216 21:06:45.175773 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/d2a06619-ff67-4a17-b2fa-b3e9f6f45345-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5\" (UID: \"d2a06619-ff67-4a17-b2fa-b3e9f6f45345\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5" Feb 16 21:06:45 crc kubenswrapper[4811]: I0216 21:06:45.200020 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm75f\" (UniqueName: \"kubernetes.io/projected/d2a06619-ff67-4a17-b2fa-b3e9f6f45345-kube-api-access-xm75f\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5\" (UID: \"d2a06619-ff67-4a17-b2fa-b3e9f6f45345\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5" Feb 16 21:06:45 crc kubenswrapper[4811]: I0216 21:06:45.327783 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5" Feb 16 21:06:45 crc kubenswrapper[4811]: I0216 21:06:45.616081 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5"] Feb 16 21:06:45 crc kubenswrapper[4811]: W0216 21:06:45.622965 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2a06619_ff67_4a17_b2fa_b3e9f6f45345.slice/crio-9dae3b817bcb6961d9a56765ac578bd0bdf8c2752d7a5e63af3034ceccb017b2 WatchSource:0}: Error finding container 9dae3b817bcb6961d9a56765ac578bd0bdf8c2752d7a5e63af3034ceccb017b2: Status 404 returned error can't find the container with id 9dae3b817bcb6961d9a56765ac578bd0bdf8c2752d7a5e63af3034ceccb017b2 Feb 16 21:06:46 crc kubenswrapper[4811]: I0216 21:06:46.161910 4811 generic.go:334] "Generic (PLEG): container finished" podID="d2a06619-ff67-4a17-b2fa-b3e9f6f45345" containerID="5c1d7745b7474c34df658862e02d5d8f8e31e3633c896c6f884647015b7176a5" 
exitCode=0 Feb 16 21:06:46 crc kubenswrapper[4811]: I0216 21:06:46.161967 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5" event={"ID":"d2a06619-ff67-4a17-b2fa-b3e9f6f45345","Type":"ContainerDied","Data":"5c1d7745b7474c34df658862e02d5d8f8e31e3633c896c6f884647015b7176a5"} Feb 16 21:06:46 crc kubenswrapper[4811]: I0216 21:06:46.162258 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5" event={"ID":"d2a06619-ff67-4a17-b2fa-b3e9f6f45345","Type":"ContainerStarted","Data":"9dae3b817bcb6961d9a56765ac578bd0bdf8c2752d7a5e63af3034ceccb017b2"} Feb 16 21:06:47 crc kubenswrapper[4811]: I0216 21:06:47.063284 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 16 21:06:47 crc kubenswrapper[4811]: I0216 21:06:47.064170 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 16 21:06:47 crc kubenswrapper[4811]: I0216 21:06:47.067052 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 16 21:06:47 crc kubenswrapper[4811]: I0216 21:06:47.067166 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 16 21:06:47 crc kubenswrapper[4811]: I0216 21:06:47.068237 4811 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-lngnw" Feb 16 21:06:47 crc kubenswrapper[4811]: I0216 21:06:47.080099 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 16 21:06:47 crc kubenswrapper[4811]: I0216 21:06:47.203894 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhj94\" (UniqueName: \"kubernetes.io/projected/18a07ff3-a481-411f-bae0-b536b76f4e19-kube-api-access-nhj94\") pod \"minio\" (UID: 
\"18a07ff3-a481-411f-bae0-b536b76f4e19\") " pod="minio-dev/minio" Feb 16 21:06:47 crc kubenswrapper[4811]: I0216 21:06:47.204037 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5259bfbd-7fe1-44db-8ad3-4ae4cd6b7362\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5259bfbd-7fe1-44db-8ad3-4ae4cd6b7362\") pod \"minio\" (UID: \"18a07ff3-a481-411f-bae0-b536b76f4e19\") " pod="minio-dev/minio" Feb 16 21:06:47 crc kubenswrapper[4811]: I0216 21:06:47.305987 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhj94\" (UniqueName: \"kubernetes.io/projected/18a07ff3-a481-411f-bae0-b536b76f4e19-kube-api-access-nhj94\") pod \"minio\" (UID: \"18a07ff3-a481-411f-bae0-b536b76f4e19\") " pod="minio-dev/minio" Feb 16 21:06:47 crc kubenswrapper[4811]: I0216 21:06:47.306477 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5259bfbd-7fe1-44db-8ad3-4ae4cd6b7362\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5259bfbd-7fe1-44db-8ad3-4ae4cd6b7362\") pod \"minio\" (UID: \"18a07ff3-a481-411f-bae0-b536b76f4e19\") " pod="minio-dev/minio" Feb 16 21:06:47 crc kubenswrapper[4811]: I0216 21:06:47.310733 4811 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:06:47 crc kubenswrapper[4811]: I0216 21:06:47.310805 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5259bfbd-7fe1-44db-8ad3-4ae4cd6b7362\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5259bfbd-7fe1-44db-8ad3-4ae4cd6b7362\") pod \"minio\" (UID: \"18a07ff3-a481-411f-bae0-b536b76f4e19\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c9e04cfa4f407d51809bf27f00fcc904f18607d0d10aed29397582ed354efb13/globalmount\"" pod="minio-dev/minio" Feb 16 21:06:47 crc kubenswrapper[4811]: I0216 21:06:47.337544 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhj94\" (UniqueName: \"kubernetes.io/projected/18a07ff3-a481-411f-bae0-b536b76f4e19-kube-api-access-nhj94\") pod \"minio\" (UID: \"18a07ff3-a481-411f-bae0-b536b76f4e19\") " pod="minio-dev/minio" Feb 16 21:06:47 crc kubenswrapper[4811]: I0216 21:06:47.355329 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5259bfbd-7fe1-44db-8ad3-4ae4cd6b7362\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5259bfbd-7fe1-44db-8ad3-4ae4cd6b7362\") pod \"minio\" (UID: \"18a07ff3-a481-411f-bae0-b536b76f4e19\") " pod="minio-dev/minio" Feb 16 21:06:47 crc kubenswrapper[4811]: I0216 21:06:47.379873 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Feb 16 21:06:47 crc kubenswrapper[4811]: I0216 21:06:47.560415 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 16 21:06:47 crc kubenswrapper[4811]: W0216 21:06:47.620290 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18a07ff3_a481_411f_bae0_b536b76f4e19.slice/crio-ae004ad142a8f0e7fd00022cd3dc359d61acd84a907e8267b7dcccdb5aee9168 WatchSource:0}: Error finding container ae004ad142a8f0e7fd00022cd3dc359d61acd84a907e8267b7dcccdb5aee9168: Status 404 returned error can't find the container with id ae004ad142a8f0e7fd00022cd3dc359d61acd84a907e8267b7dcccdb5aee9168 Feb 16 21:06:48 crc kubenswrapper[4811]: I0216 21:06:48.177542 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"18a07ff3-a481-411f-bae0-b536b76f4e19","Type":"ContainerStarted","Data":"ae004ad142a8f0e7fd00022cd3dc359d61acd84a907e8267b7dcccdb5aee9168"} Feb 16 21:06:48 crc kubenswrapper[4811]: I0216 21:06:48.181421 4811 generic.go:334] "Generic (PLEG): container finished" podID="d2a06619-ff67-4a17-b2fa-b3e9f6f45345" containerID="54eb6be0f57bb07e2139fa281076ed159adba42d857dabbc487f374b2583e26f" exitCode=0 Feb 16 21:06:48 crc kubenswrapper[4811]: I0216 21:06:48.181459 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5" event={"ID":"d2a06619-ff67-4a17-b2fa-b3e9f6f45345","Type":"ContainerDied","Data":"54eb6be0f57bb07e2139fa281076ed159adba42d857dabbc487f374b2583e26f"} Feb 16 21:06:49 crc kubenswrapper[4811]: I0216 21:06:49.194664 4811 generic.go:334] "Generic (PLEG): container finished" podID="d2a06619-ff67-4a17-b2fa-b3e9f6f45345" containerID="40a268446857997c691f31a78e7a0f71e0eff7713d1e1dd69e57929adfed7d88" exitCode=0 Feb 16 21:06:49 crc kubenswrapper[4811]: I0216 21:06:49.194788 4811 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5" event={"ID":"d2a06619-ff67-4a17-b2fa-b3e9f6f45345","Type":"ContainerDied","Data":"40a268446857997c691f31a78e7a0f71e0eff7713d1e1dd69e57929adfed7d88"} Feb 16 21:06:50 crc kubenswrapper[4811]: I0216 21:06:50.535576 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5" Feb 16 21:06:50 crc kubenswrapper[4811]: I0216 21:06:50.663530 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d2a06619-ff67-4a17-b2fa-b3e9f6f45345-util\") pod \"d2a06619-ff67-4a17-b2fa-b3e9f6f45345\" (UID: \"d2a06619-ff67-4a17-b2fa-b3e9f6f45345\") " Feb 16 21:06:50 crc kubenswrapper[4811]: I0216 21:06:50.663621 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm75f\" (UniqueName: \"kubernetes.io/projected/d2a06619-ff67-4a17-b2fa-b3e9f6f45345-kube-api-access-xm75f\") pod \"d2a06619-ff67-4a17-b2fa-b3e9f6f45345\" (UID: \"d2a06619-ff67-4a17-b2fa-b3e9f6f45345\") " Feb 16 21:06:50 crc kubenswrapper[4811]: I0216 21:06:50.663693 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d2a06619-ff67-4a17-b2fa-b3e9f6f45345-bundle\") pod \"d2a06619-ff67-4a17-b2fa-b3e9f6f45345\" (UID: \"d2a06619-ff67-4a17-b2fa-b3e9f6f45345\") " Feb 16 21:06:50 crc kubenswrapper[4811]: I0216 21:06:50.665261 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2a06619-ff67-4a17-b2fa-b3e9f6f45345-bundle" (OuterVolumeSpecName: "bundle") pod "d2a06619-ff67-4a17-b2fa-b3e9f6f45345" (UID: "d2a06619-ff67-4a17-b2fa-b3e9f6f45345"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:06:50 crc kubenswrapper[4811]: I0216 21:06:50.675720 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2a06619-ff67-4a17-b2fa-b3e9f6f45345-kube-api-access-xm75f" (OuterVolumeSpecName: "kube-api-access-xm75f") pod "d2a06619-ff67-4a17-b2fa-b3e9f6f45345" (UID: "d2a06619-ff67-4a17-b2fa-b3e9f6f45345"). InnerVolumeSpecName "kube-api-access-xm75f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:06:50 crc kubenswrapper[4811]: I0216 21:06:50.682836 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2a06619-ff67-4a17-b2fa-b3e9f6f45345-util" (OuterVolumeSpecName: "util") pod "d2a06619-ff67-4a17-b2fa-b3e9f6f45345" (UID: "d2a06619-ff67-4a17-b2fa-b3e9f6f45345"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:06:50 crc kubenswrapper[4811]: I0216 21:06:50.765040 4811 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d2a06619-ff67-4a17-b2fa-b3e9f6f45345-util\") on node \"crc\" DevicePath \"\"" Feb 16 21:06:50 crc kubenswrapper[4811]: I0216 21:06:50.765083 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xm75f\" (UniqueName: \"kubernetes.io/projected/d2a06619-ff67-4a17-b2fa-b3e9f6f45345-kube-api-access-xm75f\") on node \"crc\" DevicePath \"\"" Feb 16 21:06:50 crc kubenswrapper[4811]: I0216 21:06:50.765098 4811 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d2a06619-ff67-4a17-b2fa-b3e9f6f45345-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:06:51 crc kubenswrapper[4811]: I0216 21:06:51.214276 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5" 
event={"ID":"d2a06619-ff67-4a17-b2fa-b3e9f6f45345","Type":"ContainerDied","Data":"9dae3b817bcb6961d9a56765ac578bd0bdf8c2752d7a5e63af3034ceccb017b2"} Feb 16 21:06:51 crc kubenswrapper[4811]: I0216 21:06:51.214572 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5" Feb 16 21:06:51 crc kubenswrapper[4811]: I0216 21:06:51.214579 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dae3b817bcb6961d9a56765ac578bd0bdf8c2752d7a5e63af3034ceccb017b2" Feb 16 21:06:52 crc kubenswrapper[4811]: I0216 21:06:52.221290 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"18a07ff3-a481-411f-bae0-b536b76f4e19","Type":"ContainerStarted","Data":"e0b6afee3cbba608ec5d187c44b0b8bf8f6b821077e83953c02827fa241b17c2"} Feb 16 21:06:52 crc kubenswrapper[4811]: I0216 21:06:52.240941 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.636758666 podStartE2EDuration="8.240920227s" podCreationTimestamp="2026-02-16 21:06:44 +0000 UTC" firstStartedPulling="2026-02-16 21:06:47.623788074 +0000 UTC m=+625.553084012" lastFinishedPulling="2026-02-16 21:06:51.227949595 +0000 UTC m=+629.157245573" observedRunningTime="2026-02-16 21:06:52.238871679 +0000 UTC m=+630.168167667" watchObservedRunningTime="2026-02-16 21:06:52.240920227 +0000 UTC m=+630.170216205" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.210318 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-65947bdd9b-jmw6n"] Feb 16 21:06:57 crc kubenswrapper[4811]: E0216 21:06:57.211904 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2a06619-ff67-4a17-b2fa-b3e9f6f45345" containerName="pull" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.211957 4811 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="d2a06619-ff67-4a17-b2fa-b3e9f6f45345" containerName="pull" Feb 16 21:06:57 crc kubenswrapper[4811]: E0216 21:06:57.211980 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2a06619-ff67-4a17-b2fa-b3e9f6f45345" containerName="util" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.211992 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2a06619-ff67-4a17-b2fa-b3e9f6f45345" containerName="util" Feb 16 21:06:57 crc kubenswrapper[4811]: E0216 21:06:57.212008 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2a06619-ff67-4a17-b2fa-b3e9f6f45345" containerName="extract" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.212017 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2a06619-ff67-4a17-b2fa-b3e9f6f45345" containerName="extract" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.212240 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2a06619-ff67-4a17-b2fa-b3e9f6f45345" containerName="extract" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.213419 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-65947bdd9b-jmw6n" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.217407 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.217510 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-4mnn6" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.217804 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.217917 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.218056 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.218239 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.229396 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-65947bdd9b-jmw6n"] Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.354466 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwkxq\" (UniqueName: \"kubernetes.io/projected/d8eaf998-04df-433c-93e9-df5a9261330d-kube-api-access-hwkxq\") pod \"loki-operator-controller-manager-65947bdd9b-jmw6n\" (UID: \"d8eaf998-04df-433c-93e9-df5a9261330d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65947bdd9b-jmw6n" Feb 16 21:06:57 crc kubenswrapper[4811]: 
I0216 21:06:57.354533 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d8eaf998-04df-433c-93e9-df5a9261330d-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-65947bdd9b-jmw6n\" (UID: \"d8eaf998-04df-433c-93e9-df5a9261330d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65947bdd9b-jmw6n" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.354559 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/d8eaf998-04df-433c-93e9-df5a9261330d-manager-config\") pod \"loki-operator-controller-manager-65947bdd9b-jmw6n\" (UID: \"d8eaf998-04df-433c-93e9-df5a9261330d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65947bdd9b-jmw6n" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.354593 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8eaf998-04df-433c-93e9-df5a9261330d-webhook-cert\") pod \"loki-operator-controller-manager-65947bdd9b-jmw6n\" (UID: \"d8eaf998-04df-433c-93e9-df5a9261330d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65947bdd9b-jmw6n" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.354632 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8eaf998-04df-433c-93e9-df5a9261330d-apiservice-cert\") pod \"loki-operator-controller-manager-65947bdd9b-jmw6n\" (UID: \"d8eaf998-04df-433c-93e9-df5a9261330d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65947bdd9b-jmw6n" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.456323 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hwkxq\" (UniqueName: \"kubernetes.io/projected/d8eaf998-04df-433c-93e9-df5a9261330d-kube-api-access-hwkxq\") pod \"loki-operator-controller-manager-65947bdd9b-jmw6n\" (UID: \"d8eaf998-04df-433c-93e9-df5a9261330d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65947bdd9b-jmw6n" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.456405 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d8eaf998-04df-433c-93e9-df5a9261330d-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-65947bdd9b-jmw6n\" (UID: \"d8eaf998-04df-433c-93e9-df5a9261330d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65947bdd9b-jmw6n" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.456436 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/d8eaf998-04df-433c-93e9-df5a9261330d-manager-config\") pod \"loki-operator-controller-manager-65947bdd9b-jmw6n\" (UID: \"d8eaf998-04df-433c-93e9-df5a9261330d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65947bdd9b-jmw6n" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.456477 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8eaf998-04df-433c-93e9-df5a9261330d-webhook-cert\") pod \"loki-operator-controller-manager-65947bdd9b-jmw6n\" (UID: \"d8eaf998-04df-433c-93e9-df5a9261330d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65947bdd9b-jmw6n" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.456507 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8eaf998-04df-433c-93e9-df5a9261330d-apiservice-cert\") pod \"loki-operator-controller-manager-65947bdd9b-jmw6n\" (UID: 
\"d8eaf998-04df-433c-93e9-df5a9261330d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65947bdd9b-jmw6n" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.458041 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/d8eaf998-04df-433c-93e9-df5a9261330d-manager-config\") pod \"loki-operator-controller-manager-65947bdd9b-jmw6n\" (UID: \"d8eaf998-04df-433c-93e9-df5a9261330d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65947bdd9b-jmw6n" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.463867 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8eaf998-04df-433c-93e9-df5a9261330d-webhook-cert\") pod \"loki-operator-controller-manager-65947bdd9b-jmw6n\" (UID: \"d8eaf998-04df-433c-93e9-df5a9261330d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65947bdd9b-jmw6n" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.464014 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d8eaf998-04df-433c-93e9-df5a9261330d-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-65947bdd9b-jmw6n\" (UID: \"d8eaf998-04df-433c-93e9-df5a9261330d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65947bdd9b-jmw6n" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.464501 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8eaf998-04df-433c-93e9-df5a9261330d-apiservice-cert\") pod \"loki-operator-controller-manager-65947bdd9b-jmw6n\" (UID: \"d8eaf998-04df-433c-93e9-df5a9261330d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65947bdd9b-jmw6n" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.481189 4811 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hwkxq\" (UniqueName: \"kubernetes.io/projected/d8eaf998-04df-433c-93e9-df5a9261330d-kube-api-access-hwkxq\") pod \"loki-operator-controller-manager-65947bdd9b-jmw6n\" (UID: \"d8eaf998-04df-433c-93e9-df5a9261330d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65947bdd9b-jmw6n" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.538283 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-65947bdd9b-jmw6n" Feb 16 21:06:57 crc kubenswrapper[4811]: I0216 21:06:57.872846 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-65947bdd9b-jmw6n"] Feb 16 21:06:58 crc kubenswrapper[4811]: I0216 21:06:58.260738 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-65947bdd9b-jmw6n" event={"ID":"d8eaf998-04df-433c-93e9-df5a9261330d","Type":"ContainerStarted","Data":"d03a64636c6b18701df93059ca2d273759dfa5feef367920ba3fb9b257789a0a"} Feb 16 21:07:03 crc kubenswrapper[4811]: I0216 21:07:03.302236 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-65947bdd9b-jmw6n" event={"ID":"d8eaf998-04df-433c-93e9-df5a9261330d","Type":"ContainerStarted","Data":"6b4cc458a5f8db8f2f577b3645636e6aa5b35f31542dd437b0f4e78b386912a9"} Feb 16 21:07:09 crc kubenswrapper[4811]: I0216 21:07:09.342037 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-65947bdd9b-jmw6n" event={"ID":"d8eaf998-04df-433c-93e9-df5a9261330d","Type":"ContainerStarted","Data":"f0abca7f775c01fcef5a1a800180a3ff6a6f578917c6aa05e2c43cf1311d7903"} Feb 16 21:07:09 crc kubenswrapper[4811]: I0216 21:07:09.342606 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operators-redhat/loki-operator-controller-manager-65947bdd9b-jmw6n" Feb 16 21:07:09 crc kubenswrapper[4811]: I0216 21:07:09.347804 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-65947bdd9b-jmw6n" Feb 16 21:07:09 crc kubenswrapper[4811]: I0216 21:07:09.375888 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-65947bdd9b-jmw6n" podStartSLOduration=1.178148609 podStartE2EDuration="12.375858577s" podCreationTimestamp="2026-02-16 21:06:57 +0000 UTC" firstStartedPulling="2026-02-16 21:06:57.883814877 +0000 UTC m=+635.813110825" lastFinishedPulling="2026-02-16 21:07:09.081524855 +0000 UTC m=+647.010820793" observedRunningTime="2026-02-16 21:07:09.3680137 +0000 UTC m=+647.297309648" watchObservedRunningTime="2026-02-16 21:07:09.375858577 +0000 UTC m=+647.305154545" Feb 16 21:07:43 crc kubenswrapper[4811]: I0216 21:07:43.451734 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85"] Feb 16 21:07:43 crc kubenswrapper[4811]: I0216 21:07:43.453279 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85" Feb 16 21:07:43 crc kubenswrapper[4811]: I0216 21:07:43.457339 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 21:07:43 crc kubenswrapper[4811]: I0216 21:07:43.468617 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85"] Feb 16 21:07:43 crc kubenswrapper[4811]: I0216 21:07:43.547472 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fbfe090c-7d12-4a08-ab12-8ee916f0741f-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85\" (UID: \"fbfe090c-7d12-4a08-ab12-8ee916f0741f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85" Feb 16 21:07:43 crc kubenswrapper[4811]: I0216 21:07:43.547524 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6zc5\" (UniqueName: \"kubernetes.io/projected/fbfe090c-7d12-4a08-ab12-8ee916f0741f-kube-api-access-z6zc5\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85\" (UID: \"fbfe090c-7d12-4a08-ab12-8ee916f0741f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85" Feb 16 21:07:43 crc kubenswrapper[4811]: I0216 21:07:43.547554 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fbfe090c-7d12-4a08-ab12-8ee916f0741f-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85\" (UID: \"fbfe090c-7d12-4a08-ab12-8ee916f0741f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85" Feb 16 21:07:43 crc kubenswrapper[4811]: 
I0216 21:07:43.649401 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fbfe090c-7d12-4a08-ab12-8ee916f0741f-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85\" (UID: \"fbfe090c-7d12-4a08-ab12-8ee916f0741f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85" Feb 16 21:07:43 crc kubenswrapper[4811]: I0216 21:07:43.649470 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6zc5\" (UniqueName: \"kubernetes.io/projected/fbfe090c-7d12-4a08-ab12-8ee916f0741f-kube-api-access-z6zc5\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85\" (UID: \"fbfe090c-7d12-4a08-ab12-8ee916f0741f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85" Feb 16 21:07:43 crc kubenswrapper[4811]: I0216 21:07:43.649522 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fbfe090c-7d12-4a08-ab12-8ee916f0741f-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85\" (UID: \"fbfe090c-7d12-4a08-ab12-8ee916f0741f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85" Feb 16 21:07:43 crc kubenswrapper[4811]: I0216 21:07:43.650025 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fbfe090c-7d12-4a08-ab12-8ee916f0741f-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85\" (UID: \"fbfe090c-7d12-4a08-ab12-8ee916f0741f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85" Feb 16 21:07:43 crc kubenswrapper[4811]: I0216 21:07:43.650099 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/fbfe090c-7d12-4a08-ab12-8ee916f0741f-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85\" (UID: \"fbfe090c-7d12-4a08-ab12-8ee916f0741f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85" Feb 16 21:07:43 crc kubenswrapper[4811]: I0216 21:07:43.685424 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6zc5\" (UniqueName: \"kubernetes.io/projected/fbfe090c-7d12-4a08-ab12-8ee916f0741f-kube-api-access-z6zc5\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85\" (UID: \"fbfe090c-7d12-4a08-ab12-8ee916f0741f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85" Feb 16 21:07:43 crc kubenswrapper[4811]: I0216 21:07:43.772512 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85" Feb 16 21:07:43 crc kubenswrapper[4811]: I0216 21:07:43.993660 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85"] Feb 16 21:07:44 crc kubenswrapper[4811]: I0216 21:07:44.738116 4811 generic.go:334] "Generic (PLEG): container finished" podID="fbfe090c-7d12-4a08-ab12-8ee916f0741f" containerID="6e99ebab5ed13a201f402f3e0e4919163d68c75b67c27f3bf484fd00ef085f6e" exitCode=0 Feb 16 21:07:44 crc kubenswrapper[4811]: I0216 21:07:44.738230 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85" event={"ID":"fbfe090c-7d12-4a08-ab12-8ee916f0741f","Type":"ContainerDied","Data":"6e99ebab5ed13a201f402f3e0e4919163d68c75b67c27f3bf484fd00ef085f6e"} Feb 16 21:07:44 crc kubenswrapper[4811]: I0216 21:07:44.738515 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85" event={"ID":"fbfe090c-7d12-4a08-ab12-8ee916f0741f","Type":"ContainerStarted","Data":"b39d10daf04a472ad9e7d0bf63df19488bb6d7283f7a27599a6844b544563578"} Feb 16 21:07:46 crc kubenswrapper[4811]: I0216 21:07:46.756446 4811 generic.go:334] "Generic (PLEG): container finished" podID="fbfe090c-7d12-4a08-ab12-8ee916f0741f" containerID="e729182bc2120c8ef93838fa8c78b00e957616f889f3b0059c9b645a24c54aea" exitCode=0 Feb 16 21:07:46 crc kubenswrapper[4811]: I0216 21:07:46.756529 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85" event={"ID":"fbfe090c-7d12-4a08-ab12-8ee916f0741f","Type":"ContainerDied","Data":"e729182bc2120c8ef93838fa8c78b00e957616f889f3b0059c9b645a24c54aea"} Feb 16 21:07:47 crc kubenswrapper[4811]: I0216 21:07:47.767072 4811 generic.go:334] "Generic (PLEG): container finished" podID="fbfe090c-7d12-4a08-ab12-8ee916f0741f" containerID="ae094c00f32d9b96f77a212f18264e3a00d780d2f708a4534f1f70f82f290c55" exitCode=0 Feb 16 21:07:47 crc kubenswrapper[4811]: I0216 21:07:47.767148 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85" event={"ID":"fbfe090c-7d12-4a08-ab12-8ee916f0741f","Type":"ContainerDied","Data":"ae094c00f32d9b96f77a212f18264e3a00d780d2f708a4534f1f70f82f290c55"} Feb 16 21:07:49 crc kubenswrapper[4811]: I0216 21:07:49.082133 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85" Feb 16 21:07:49 crc kubenswrapper[4811]: I0216 21:07:49.142024 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fbfe090c-7d12-4a08-ab12-8ee916f0741f-util\") pod \"fbfe090c-7d12-4a08-ab12-8ee916f0741f\" (UID: \"fbfe090c-7d12-4a08-ab12-8ee916f0741f\") " Feb 16 21:07:49 crc kubenswrapper[4811]: I0216 21:07:49.142110 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6zc5\" (UniqueName: \"kubernetes.io/projected/fbfe090c-7d12-4a08-ab12-8ee916f0741f-kube-api-access-z6zc5\") pod \"fbfe090c-7d12-4a08-ab12-8ee916f0741f\" (UID: \"fbfe090c-7d12-4a08-ab12-8ee916f0741f\") " Feb 16 21:07:49 crc kubenswrapper[4811]: I0216 21:07:49.142221 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fbfe090c-7d12-4a08-ab12-8ee916f0741f-bundle\") pod \"fbfe090c-7d12-4a08-ab12-8ee916f0741f\" (UID: \"fbfe090c-7d12-4a08-ab12-8ee916f0741f\") " Feb 16 21:07:49 crc kubenswrapper[4811]: I0216 21:07:49.143041 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbfe090c-7d12-4a08-ab12-8ee916f0741f-bundle" (OuterVolumeSpecName: "bundle") pod "fbfe090c-7d12-4a08-ab12-8ee916f0741f" (UID: "fbfe090c-7d12-4a08-ab12-8ee916f0741f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:07:49 crc kubenswrapper[4811]: I0216 21:07:49.151372 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbfe090c-7d12-4a08-ab12-8ee916f0741f-kube-api-access-z6zc5" (OuterVolumeSpecName: "kube-api-access-z6zc5") pod "fbfe090c-7d12-4a08-ab12-8ee916f0741f" (UID: "fbfe090c-7d12-4a08-ab12-8ee916f0741f"). InnerVolumeSpecName "kube-api-access-z6zc5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:07:49 crc kubenswrapper[4811]: I0216 21:07:49.160745 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbfe090c-7d12-4a08-ab12-8ee916f0741f-util" (OuterVolumeSpecName: "util") pod "fbfe090c-7d12-4a08-ab12-8ee916f0741f" (UID: "fbfe090c-7d12-4a08-ab12-8ee916f0741f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:07:49 crc kubenswrapper[4811]: I0216 21:07:49.244398 4811 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fbfe090c-7d12-4a08-ab12-8ee916f0741f-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:07:49 crc kubenswrapper[4811]: I0216 21:07:49.244467 4811 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fbfe090c-7d12-4a08-ab12-8ee916f0741f-util\") on node \"crc\" DevicePath \"\"" Feb 16 21:07:49 crc kubenswrapper[4811]: I0216 21:07:49.244500 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6zc5\" (UniqueName: \"kubernetes.io/projected/fbfe090c-7d12-4a08-ab12-8ee916f0741f-kube-api-access-z6zc5\") on node \"crc\" DevicePath \"\"" Feb 16 21:07:49 crc kubenswrapper[4811]: I0216 21:07:49.792804 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85" event={"ID":"fbfe090c-7d12-4a08-ab12-8ee916f0741f","Type":"ContainerDied","Data":"b39d10daf04a472ad9e7d0bf63df19488bb6d7283f7a27599a6844b544563578"} Feb 16 21:07:49 crc kubenswrapper[4811]: I0216 21:07:49.792866 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b39d10daf04a472ad9e7d0bf63df19488bb6d7283f7a27599a6844b544563578" Feb 16 21:07:49 crc kubenswrapper[4811]: I0216 21:07:49.792930 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85" Feb 16 21:07:55 crc kubenswrapper[4811]: I0216 21:07:55.271397 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-jdk8r"] Feb 16 21:07:55 crc kubenswrapper[4811]: E0216 21:07:55.272066 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbfe090c-7d12-4a08-ab12-8ee916f0741f" containerName="util" Feb 16 21:07:55 crc kubenswrapper[4811]: I0216 21:07:55.272082 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbfe090c-7d12-4a08-ab12-8ee916f0741f" containerName="util" Feb 16 21:07:55 crc kubenswrapper[4811]: E0216 21:07:55.272098 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbfe090c-7d12-4a08-ab12-8ee916f0741f" containerName="pull" Feb 16 21:07:55 crc kubenswrapper[4811]: I0216 21:07:55.272106 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbfe090c-7d12-4a08-ab12-8ee916f0741f" containerName="pull" Feb 16 21:07:55 crc kubenswrapper[4811]: E0216 21:07:55.272127 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbfe090c-7d12-4a08-ab12-8ee916f0741f" containerName="extract" Feb 16 21:07:55 crc kubenswrapper[4811]: I0216 21:07:55.272164 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbfe090c-7d12-4a08-ab12-8ee916f0741f" containerName="extract" Feb 16 21:07:55 crc kubenswrapper[4811]: I0216 21:07:55.272332 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbfe090c-7d12-4a08-ab12-8ee916f0741f" containerName="extract" Feb 16 21:07:55 crc kubenswrapper[4811]: I0216 21:07:55.272826 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-jdk8r" Feb 16 21:07:55 crc kubenswrapper[4811]: I0216 21:07:55.278920 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 16 21:07:55 crc kubenswrapper[4811]: I0216 21:07:55.278978 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-cct9z" Feb 16 21:07:55 crc kubenswrapper[4811]: I0216 21:07:55.279071 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 16 21:07:55 crc kubenswrapper[4811]: I0216 21:07:55.297118 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-jdk8r"] Feb 16 21:07:55 crc kubenswrapper[4811]: I0216 21:07:55.342560 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zc64\" (UniqueName: \"kubernetes.io/projected/97cb0194-bca8-4074-bf79-c7827cdd12a4-kube-api-access-2zc64\") pod \"nmstate-operator-694c9596b7-jdk8r\" (UID: \"97cb0194-bca8-4074-bf79-c7827cdd12a4\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-jdk8r" Feb 16 21:07:55 crc kubenswrapper[4811]: I0216 21:07:55.443488 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zc64\" (UniqueName: \"kubernetes.io/projected/97cb0194-bca8-4074-bf79-c7827cdd12a4-kube-api-access-2zc64\") pod \"nmstate-operator-694c9596b7-jdk8r\" (UID: \"97cb0194-bca8-4074-bf79-c7827cdd12a4\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-jdk8r" Feb 16 21:07:55 crc kubenswrapper[4811]: I0216 21:07:55.470242 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zc64\" (UniqueName: \"kubernetes.io/projected/97cb0194-bca8-4074-bf79-c7827cdd12a4-kube-api-access-2zc64\") pod \"nmstate-operator-694c9596b7-jdk8r\" (UID: 
\"97cb0194-bca8-4074-bf79-c7827cdd12a4\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-jdk8r" Feb 16 21:07:55 crc kubenswrapper[4811]: I0216 21:07:55.597629 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-jdk8r" Feb 16 21:07:55 crc kubenswrapper[4811]: I0216 21:07:55.974454 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-jdk8r"] Feb 16 21:07:56 crc kubenswrapper[4811]: I0216 21:07:56.851249 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-jdk8r" event={"ID":"97cb0194-bca8-4074-bf79-c7827cdd12a4","Type":"ContainerStarted","Data":"0eaddd132d84df276b8c50f0c983b32ffdb2b70b6fa88b080e9449aa7c50c760"} Feb 16 21:07:58 crc kubenswrapper[4811]: I0216 21:07:58.869234 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-jdk8r" event={"ID":"97cb0194-bca8-4074-bf79-c7827cdd12a4","Type":"ContainerStarted","Data":"34cd4d510102d78950d527dda6668db0e0d0752da17126a3099973104cdef6e4"} Feb 16 21:07:58 crc kubenswrapper[4811]: I0216 21:07:58.902778 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-jdk8r" podStartSLOduration=1.926007493 podStartE2EDuration="3.902751754s" podCreationTimestamp="2026-02-16 21:07:55 +0000 UTC" firstStartedPulling="2026-02-16 21:07:55.994326485 +0000 UTC m=+693.923622433" lastFinishedPulling="2026-02-16 21:07:57.971070756 +0000 UTC m=+695.900366694" observedRunningTime="2026-02-16 21:07:58.895294283 +0000 UTC m=+696.824590221" watchObservedRunningTime="2026-02-16 21:07:58.902751754 +0000 UTC m=+696.832047732" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.654174 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-9jlhr"] Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 
21:08:04.656234 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-9jlhr" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.659432 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-tqgsz" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.676768 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-dfn87"] Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.677904 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-dfn87" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.682313 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.685815 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-9jlhr"] Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.692140 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-dfn87"] Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.702021 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-v68rf"] Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.703066 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-v68rf" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.768021 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6655fbcb-d36f-45c8-a8b9-233070bddb6e-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-dfn87\" (UID: \"6655fbcb-d36f-45c8-a8b9-233070bddb6e\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-dfn87" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.768108 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/11976713-8674-4ea3-829a-b5ce035052bb-ovs-socket\") pod \"nmstate-handler-v68rf\" (UID: \"11976713-8674-4ea3-829a-b5ce035052bb\") " pod="openshift-nmstate/nmstate-handler-v68rf" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.768146 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqsd6\" (UniqueName: \"kubernetes.io/projected/6655fbcb-d36f-45c8-a8b9-233070bddb6e-kube-api-access-tqsd6\") pod \"nmstate-webhook-866bcb46dc-dfn87\" (UID: \"6655fbcb-d36f-45c8-a8b9-233070bddb6e\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-dfn87" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.768165 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfg98\" (UniqueName: \"kubernetes.io/projected/11976713-8674-4ea3-829a-b5ce035052bb-kube-api-access-wfg98\") pod \"nmstate-handler-v68rf\" (UID: \"11976713-8674-4ea3-829a-b5ce035052bb\") " pod="openshift-nmstate/nmstate-handler-v68rf" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.768185 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/11976713-8674-4ea3-829a-b5ce035052bb-dbus-socket\") pod \"nmstate-handler-v68rf\" (UID: \"11976713-8674-4ea3-829a-b5ce035052bb\") " pod="openshift-nmstate/nmstate-handler-v68rf" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.768221 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q48r7\" (UniqueName: \"kubernetes.io/projected/4f24dbb8-fb2e-4076-a050-2fbcdbbceefd-kube-api-access-q48r7\") pod \"nmstate-metrics-58c85c668d-9jlhr\" (UID: \"4f24dbb8-fb2e-4076-a050-2fbcdbbceefd\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-9jlhr" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.768243 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/11976713-8674-4ea3-829a-b5ce035052bb-nmstate-lock\") pod \"nmstate-handler-v68rf\" (UID: \"11976713-8674-4ea3-829a-b5ce035052bb\") " pod="openshift-nmstate/nmstate-handler-v68rf" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.799767 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n8wkn"] Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.800602 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n8wkn" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.802693 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.803576 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.805506 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-5tkmf" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.814157 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n8wkn"] Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.869336 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/57c3d8d6-3964-4cdd-ad7b-270a01966704-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-n8wkn\" (UID: \"57c3d8d6-3964-4cdd-ad7b-270a01966704\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n8wkn" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.869397 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6655fbcb-d36f-45c8-a8b9-233070bddb6e-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-dfn87\" (UID: \"6655fbcb-d36f-45c8-a8b9-233070bddb6e\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-dfn87" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.869454 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/11976713-8674-4ea3-829a-b5ce035052bb-ovs-socket\") pod \"nmstate-handler-v68rf\" (UID: \"11976713-8674-4ea3-829a-b5ce035052bb\") " pod="openshift-nmstate/nmstate-handler-v68rf" Feb 16 
21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.869476 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpp5c\" (UniqueName: \"kubernetes.io/projected/57c3d8d6-3964-4cdd-ad7b-270a01966704-kube-api-access-bpp5c\") pod \"nmstate-console-plugin-5c78fc5d65-n8wkn\" (UID: \"57c3d8d6-3964-4cdd-ad7b-270a01966704\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n8wkn" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.869501 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/57c3d8d6-3964-4cdd-ad7b-270a01966704-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-n8wkn\" (UID: \"57c3d8d6-3964-4cdd-ad7b-270a01966704\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n8wkn" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.869522 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqsd6\" (UniqueName: \"kubernetes.io/projected/6655fbcb-d36f-45c8-a8b9-233070bddb6e-kube-api-access-tqsd6\") pod \"nmstate-webhook-866bcb46dc-dfn87\" (UID: \"6655fbcb-d36f-45c8-a8b9-233070bddb6e\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-dfn87" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.869538 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfg98\" (UniqueName: \"kubernetes.io/projected/11976713-8674-4ea3-829a-b5ce035052bb-kube-api-access-wfg98\") pod \"nmstate-handler-v68rf\" (UID: \"11976713-8674-4ea3-829a-b5ce035052bb\") " pod="openshift-nmstate/nmstate-handler-v68rf" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.869555 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/11976713-8674-4ea3-829a-b5ce035052bb-dbus-socket\") pod \"nmstate-handler-v68rf\" (UID: 
\"11976713-8674-4ea3-829a-b5ce035052bb\") " pod="openshift-nmstate/nmstate-handler-v68rf" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.869576 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q48r7\" (UniqueName: \"kubernetes.io/projected/4f24dbb8-fb2e-4076-a050-2fbcdbbceefd-kube-api-access-q48r7\") pod \"nmstate-metrics-58c85c668d-9jlhr\" (UID: \"4f24dbb8-fb2e-4076-a050-2fbcdbbceefd\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-9jlhr" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.869597 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/11976713-8674-4ea3-829a-b5ce035052bb-nmstate-lock\") pod \"nmstate-handler-v68rf\" (UID: \"11976713-8674-4ea3-829a-b5ce035052bb\") " pod="openshift-nmstate/nmstate-handler-v68rf" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.869662 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/11976713-8674-4ea3-829a-b5ce035052bb-nmstate-lock\") pod \"nmstate-handler-v68rf\" (UID: \"11976713-8674-4ea3-829a-b5ce035052bb\") " pod="openshift-nmstate/nmstate-handler-v68rf" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.870367 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/11976713-8674-4ea3-829a-b5ce035052bb-ovs-socket\") pod \"nmstate-handler-v68rf\" (UID: \"11976713-8674-4ea3-829a-b5ce035052bb\") " pod="openshift-nmstate/nmstate-handler-v68rf" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.870585 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/11976713-8674-4ea3-829a-b5ce035052bb-dbus-socket\") pod \"nmstate-handler-v68rf\" (UID: \"11976713-8674-4ea3-829a-b5ce035052bb\") " pod="openshift-nmstate/nmstate-handler-v68rf" Feb 
16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.883374 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6655fbcb-d36f-45c8-a8b9-233070bddb6e-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-dfn87\" (UID: \"6655fbcb-d36f-45c8-a8b9-233070bddb6e\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-dfn87" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.885504 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfg98\" (UniqueName: \"kubernetes.io/projected/11976713-8674-4ea3-829a-b5ce035052bb-kube-api-access-wfg98\") pod \"nmstate-handler-v68rf\" (UID: \"11976713-8674-4ea3-829a-b5ce035052bb\") " pod="openshift-nmstate/nmstate-handler-v68rf" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.897277 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqsd6\" (UniqueName: \"kubernetes.io/projected/6655fbcb-d36f-45c8-a8b9-233070bddb6e-kube-api-access-tqsd6\") pod \"nmstate-webhook-866bcb46dc-dfn87\" (UID: \"6655fbcb-d36f-45c8-a8b9-233070bddb6e\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-dfn87" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.897776 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q48r7\" (UniqueName: \"kubernetes.io/projected/4f24dbb8-fb2e-4076-a050-2fbcdbbceefd-kube-api-access-q48r7\") pod \"nmstate-metrics-58c85c668d-9jlhr\" (UID: \"4f24dbb8-fb2e-4076-a050-2fbcdbbceefd\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-9jlhr" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.970645 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpp5c\" (UniqueName: \"kubernetes.io/projected/57c3d8d6-3964-4cdd-ad7b-270a01966704-kube-api-access-bpp5c\") pod \"nmstate-console-plugin-5c78fc5d65-n8wkn\" (UID: \"57c3d8d6-3964-4cdd-ad7b-270a01966704\") " 
pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n8wkn" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.970712 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/57c3d8d6-3964-4cdd-ad7b-270a01966704-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-n8wkn\" (UID: \"57c3d8d6-3964-4cdd-ad7b-270a01966704\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n8wkn" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.970769 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/57c3d8d6-3964-4cdd-ad7b-270a01966704-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-n8wkn\" (UID: \"57c3d8d6-3964-4cdd-ad7b-270a01966704\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n8wkn" Feb 16 21:08:04 crc kubenswrapper[4811]: E0216 21:08:04.971777 4811 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.971903 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/57c3d8d6-3964-4cdd-ad7b-270a01966704-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-n8wkn\" (UID: \"57c3d8d6-3964-4cdd-ad7b-270a01966704\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n8wkn" Feb 16 21:08:04 crc kubenswrapper[4811]: E0216 21:08:04.972027 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57c3d8d6-3964-4cdd-ad7b-270a01966704-plugin-serving-cert podName:57c3d8d6-3964-4cdd-ad7b-270a01966704 nodeName:}" failed. No retries permitted until 2026-02-16 21:08:05.471952831 +0000 UTC m=+703.401248769 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/57c3d8d6-3964-4cdd-ad7b-270a01966704-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-n8wkn" (UID: "57c3d8d6-3964-4cdd-ad7b-270a01966704") : secret "plugin-serving-cert" not found Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.978170 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5f4dbc997-gcnjt"] Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.979079 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.983810 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-9jlhr" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.992661 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpp5c\" (UniqueName: \"kubernetes.io/projected/57c3d8d6-3964-4cdd-ad7b-270a01966704-kube-api-access-bpp5c\") pod \"nmstate-console-plugin-5c78fc5d65-n8wkn\" (UID: \"57c3d8d6-3964-4cdd-ad7b-270a01966704\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n8wkn" Feb 16 21:08:04 crc kubenswrapper[4811]: I0216 21:08:04.997019 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5f4dbc997-gcnjt"] Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.006330 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-dfn87" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.026731 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-v68rf" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.072533 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/51e2991e-bf32-4634-ad54-24805ecca55e-console-oauth-config\") pod \"console-5f4dbc997-gcnjt\" (UID: \"51e2991e-bf32-4634-ad54-24805ecca55e\") " pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.072585 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/51e2991e-bf32-4634-ad54-24805ecca55e-service-ca\") pod \"console-5f4dbc997-gcnjt\" (UID: \"51e2991e-bf32-4634-ad54-24805ecca55e\") " pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.072646 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/51e2991e-bf32-4634-ad54-24805ecca55e-console-serving-cert\") pod \"console-5f4dbc997-gcnjt\" (UID: \"51e2991e-bf32-4634-ad54-24805ecca55e\") " pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.072670 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bghsx\" (UniqueName: \"kubernetes.io/projected/51e2991e-bf32-4634-ad54-24805ecca55e-kube-api-access-bghsx\") pod \"console-5f4dbc997-gcnjt\" (UID: \"51e2991e-bf32-4634-ad54-24805ecca55e\") " pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.072686 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/51e2991e-bf32-4634-ad54-24805ecca55e-trusted-ca-bundle\") pod \"console-5f4dbc997-gcnjt\" (UID: \"51e2991e-bf32-4634-ad54-24805ecca55e\") " pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.072700 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/51e2991e-bf32-4634-ad54-24805ecca55e-oauth-serving-cert\") pod \"console-5f4dbc997-gcnjt\" (UID: \"51e2991e-bf32-4634-ad54-24805ecca55e\") " pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.072742 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/51e2991e-bf32-4634-ad54-24805ecca55e-console-config\") pod \"console-5f4dbc997-gcnjt\" (UID: \"51e2991e-bf32-4634-ad54-24805ecca55e\") " pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.173810 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/51e2991e-bf32-4634-ad54-24805ecca55e-console-oauth-config\") pod \"console-5f4dbc997-gcnjt\" (UID: \"51e2991e-bf32-4634-ad54-24805ecca55e\") " pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.174312 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/51e2991e-bf32-4634-ad54-24805ecca55e-service-ca\") pod \"console-5f4dbc997-gcnjt\" (UID: \"51e2991e-bf32-4634-ad54-24805ecca55e\") " pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.174389 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/51e2991e-bf32-4634-ad54-24805ecca55e-console-serving-cert\") pod \"console-5f4dbc997-gcnjt\" (UID: \"51e2991e-bf32-4634-ad54-24805ecca55e\") " pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.174412 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bghsx\" (UniqueName: \"kubernetes.io/projected/51e2991e-bf32-4634-ad54-24805ecca55e-kube-api-access-bghsx\") pod \"console-5f4dbc997-gcnjt\" (UID: \"51e2991e-bf32-4634-ad54-24805ecca55e\") " pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.174434 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/51e2991e-bf32-4634-ad54-24805ecca55e-oauth-serving-cert\") pod \"console-5f4dbc997-gcnjt\" (UID: \"51e2991e-bf32-4634-ad54-24805ecca55e\") " pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.174458 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51e2991e-bf32-4634-ad54-24805ecca55e-trusted-ca-bundle\") pod \"console-5f4dbc997-gcnjt\" (UID: \"51e2991e-bf32-4634-ad54-24805ecca55e\") " pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.174499 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/51e2991e-bf32-4634-ad54-24805ecca55e-console-config\") pod \"console-5f4dbc997-gcnjt\" (UID: \"51e2991e-bf32-4634-ad54-24805ecca55e\") " pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.175838 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/51e2991e-bf32-4634-ad54-24805ecca55e-service-ca\") pod \"console-5f4dbc997-gcnjt\" (UID: \"51e2991e-bf32-4634-ad54-24805ecca55e\") " pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.176076 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/51e2991e-bf32-4634-ad54-24805ecca55e-console-config\") pod \"console-5f4dbc997-gcnjt\" (UID: \"51e2991e-bf32-4634-ad54-24805ecca55e\") " pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.176678 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51e2991e-bf32-4634-ad54-24805ecca55e-trusted-ca-bundle\") pod \"console-5f4dbc997-gcnjt\" (UID: \"51e2991e-bf32-4634-ad54-24805ecca55e\") " pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.176714 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/51e2991e-bf32-4634-ad54-24805ecca55e-oauth-serving-cert\") pod \"console-5f4dbc997-gcnjt\" (UID: \"51e2991e-bf32-4634-ad54-24805ecca55e\") " pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.186545 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/51e2991e-bf32-4634-ad54-24805ecca55e-console-oauth-config\") pod \"console-5f4dbc997-gcnjt\" (UID: \"51e2991e-bf32-4634-ad54-24805ecca55e\") " pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.187522 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/51e2991e-bf32-4634-ad54-24805ecca55e-console-serving-cert\") pod \"console-5f4dbc997-gcnjt\" (UID: \"51e2991e-bf32-4634-ad54-24805ecca55e\") " pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.194374 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bghsx\" (UniqueName: \"kubernetes.io/projected/51e2991e-bf32-4634-ad54-24805ecca55e-kube-api-access-bghsx\") pod \"console-5f4dbc997-gcnjt\" (UID: \"51e2991e-bf32-4634-ad54-24805ecca55e\") " pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.339553 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.394861 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-dfn87"] Feb 16 21:08:05 crc kubenswrapper[4811]: W0216 21:08:05.435307 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6655fbcb_d36f_45c8_a8b9_233070bddb6e.slice/crio-e6266eede371451af4fa1065286f48b5873f80b4e19ca6cf574d17af7efba123 WatchSource:0}: Error finding container e6266eede371451af4fa1065286f48b5873f80b4e19ca6cf574d17af7efba123: Status 404 returned error can't find the container with id e6266eede371451af4fa1065286f48b5873f80b4e19ca6cf574d17af7efba123 Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.468851 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-9jlhr"] Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.491954 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/57c3d8d6-3964-4cdd-ad7b-270a01966704-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-n8wkn\" 
(UID: \"57c3d8d6-3964-4cdd-ad7b-270a01966704\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n8wkn" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.496774 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/57c3d8d6-3964-4cdd-ad7b-270a01966704-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-n8wkn\" (UID: \"57c3d8d6-3964-4cdd-ad7b-270a01966704\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n8wkn" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.624328 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5f4dbc997-gcnjt"] Feb 16 21:08:05 crc kubenswrapper[4811]: W0216 21:08:05.629506 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51e2991e_bf32_4634_ad54_24805ecca55e.slice/crio-72f1dc599558271c7feecd8107cc35d1374e50454cfb7147f7130bda594c6651 WatchSource:0}: Error finding container 72f1dc599558271c7feecd8107cc35d1374e50454cfb7147f7130bda594c6651: Status 404 returned error can't find the container with id 72f1dc599558271c7feecd8107cc35d1374e50454cfb7147f7130bda594c6651 Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.722396 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n8wkn" Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.924546 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-dfn87" event={"ID":"6655fbcb-d36f-45c8-a8b9-233070bddb6e","Type":"ContainerStarted","Data":"e6266eede371451af4fa1065286f48b5873f80b4e19ca6cf574d17af7efba123"} Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.926616 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-v68rf" event={"ID":"11976713-8674-4ea3-829a-b5ce035052bb","Type":"ContainerStarted","Data":"7cff4002f297f359bc7e28ea69e42d6217c79bcfe5969c22c69c0d5de86f09c6"} Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.928095 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5f4dbc997-gcnjt" event={"ID":"51e2991e-bf32-4634-ad54-24805ecca55e","Type":"ContainerStarted","Data":"abdccea463579d9cb94423c15c769608b2af66667d0b44338985f15b1d5d8128"} Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.928175 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5f4dbc997-gcnjt" event={"ID":"51e2991e-bf32-4634-ad54-24805ecca55e","Type":"ContainerStarted","Data":"72f1dc599558271c7feecd8107cc35d1374e50454cfb7147f7130bda594c6651"} Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.935821 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-9jlhr" event={"ID":"4f24dbb8-fb2e-4076-a050-2fbcdbbceefd","Type":"ContainerStarted","Data":"4d9416a4ddada37a9f4044e98a1fff98ebc0f68a9932a3528844aa7e0870af80"} Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.938975 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n8wkn"] Feb 16 21:08:05 crc kubenswrapper[4811]: I0216 21:08:05.948666 4811 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-console/console-5f4dbc997-gcnjt" podStartSLOduration=1.948643332 podStartE2EDuration="1.948643332s" podCreationTimestamp="2026-02-16 21:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:08:05.942653346 +0000 UTC m=+703.871949294" watchObservedRunningTime="2026-02-16 21:08:05.948643332 +0000 UTC m=+703.877939280" Feb 16 21:08:05 crc kubenswrapper[4811]: W0216 21:08:05.949411 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57c3d8d6_3964_4cdd_ad7b_270a01966704.slice/crio-a142f9844c25257f57ceaa7d41af1d545960a7cff5dab9c5c92555b1008b8f59 WatchSource:0}: Error finding container a142f9844c25257f57ceaa7d41af1d545960a7cff5dab9c5c92555b1008b8f59: Status 404 returned error can't find the container with id a142f9844c25257f57ceaa7d41af1d545960a7cff5dab9c5c92555b1008b8f59 Feb 16 21:08:06 crc kubenswrapper[4811]: I0216 21:08:06.945690 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n8wkn" event={"ID":"57c3d8d6-3964-4cdd-ad7b-270a01966704","Type":"ContainerStarted","Data":"a142f9844c25257f57ceaa7d41af1d545960a7cff5dab9c5c92555b1008b8f59"} Feb 16 21:08:08 crc kubenswrapper[4811]: I0216 21:08:08.966895 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-9jlhr" event={"ID":"4f24dbb8-fb2e-4076-a050-2fbcdbbceefd","Type":"ContainerStarted","Data":"d3421e8326f92a0701ee4cc97c7be1a1ffcb042369eea7a33261ac27ce0e9b51"} Feb 16 21:08:08 crc kubenswrapper[4811]: I0216 21:08:08.970350 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-v68rf" event={"ID":"11976713-8674-4ea3-829a-b5ce035052bb","Type":"ContainerStarted","Data":"5964ffd687f2ed85b34ff8bf8f0610129bb92567dfd53547b5b493451ff5487e"} Feb 16 21:08:08 crc 
kubenswrapper[4811]: I0216 21:08:08.970502 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-v68rf" Feb 16 21:08:08 crc kubenswrapper[4811]: I0216 21:08:08.972277 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-dfn87" event={"ID":"6655fbcb-d36f-45c8-a8b9-233070bddb6e","Type":"ContainerStarted","Data":"79585d315b9b83b3c90f3155e1f0c592e34b637aea02bd263673964f775f7083"} Feb 16 21:08:08 crc kubenswrapper[4811]: I0216 21:08:08.972644 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-dfn87" Feb 16 21:08:08 crc kubenswrapper[4811]: I0216 21:08:08.990755 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-v68rf" podStartSLOduration=2.149172364 podStartE2EDuration="4.990739918s" podCreationTimestamp="2026-02-16 21:08:04 +0000 UTC" firstStartedPulling="2026-02-16 21:08:05.051447392 +0000 UTC m=+702.980743330" lastFinishedPulling="2026-02-16 21:08:07.893014946 +0000 UTC m=+705.822310884" observedRunningTime="2026-02-16 21:08:08.99040673 +0000 UTC m=+706.919702668" watchObservedRunningTime="2026-02-16 21:08:08.990739918 +0000 UTC m=+706.920035856" Feb 16 21:08:09 crc kubenswrapper[4811]: I0216 21:08:09.011978 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-dfn87" podStartSLOduration=2.569168823 podStartE2EDuration="5.011960773s" podCreationTimestamp="2026-02-16 21:08:04 +0000 UTC" firstStartedPulling="2026-02-16 21:08:05.440516151 +0000 UTC m=+703.369812089" lastFinishedPulling="2026-02-16 21:08:07.883308091 +0000 UTC m=+705.812604039" observedRunningTime="2026-02-16 21:08:09.00566576 +0000 UTC m=+706.934961708" watchObservedRunningTime="2026-02-16 21:08:09.011960773 +0000 UTC m=+706.941256711" Feb 16 21:08:09 crc kubenswrapper[4811]: I0216 21:08:09.984232 4811 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n8wkn" event={"ID":"57c3d8d6-3964-4cdd-ad7b-270a01966704","Type":"ContainerStarted","Data":"f0d604f4d48aba607e6069f7e10d83d4ce79e8ecb379e388700feacb1b0702e0"} Feb 16 21:08:10 crc kubenswrapper[4811]: I0216 21:08:10.010873 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n8wkn" podStartSLOduration=3.087535191 podStartE2EDuration="6.010852113s" podCreationTimestamp="2026-02-16 21:08:04 +0000 UTC" firstStartedPulling="2026-02-16 21:08:05.951022539 +0000 UTC m=+703.880318477" lastFinishedPulling="2026-02-16 21:08:08.874339461 +0000 UTC m=+706.803635399" observedRunningTime="2026-02-16 21:08:10.001453125 +0000 UTC m=+707.930749083" watchObservedRunningTime="2026-02-16 21:08:10.010852113 +0000 UTC m=+707.940148071" Feb 16 21:08:12 crc kubenswrapper[4811]: I0216 21:08:12.006616 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-9jlhr" event={"ID":"4f24dbb8-fb2e-4076-a050-2fbcdbbceefd","Type":"ContainerStarted","Data":"283ff3dbb89788f4053cf9e73f69cdbc17c7e77c8f4430369ef41864d7b08227"} Feb 16 21:08:12 crc kubenswrapper[4811]: I0216 21:08:12.040586 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-9jlhr" podStartSLOduration=2.671815317 podStartE2EDuration="8.040552841s" podCreationTimestamp="2026-02-16 21:08:04 +0000 UTC" firstStartedPulling="2026-02-16 21:08:05.483242009 +0000 UTC m=+703.412537947" lastFinishedPulling="2026-02-16 21:08:10.851979533 +0000 UTC m=+708.781275471" observedRunningTime="2026-02-16 21:08:12.030860445 +0000 UTC m=+709.960156433" watchObservedRunningTime="2026-02-16 21:08:12.040552841 +0000 UTC m=+709.969848809" Feb 16 21:08:15 crc kubenswrapper[4811]: I0216 21:08:15.065418 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-nmstate/nmstate-handler-v68rf" Feb 16 21:08:15 crc kubenswrapper[4811]: I0216 21:08:15.340072 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:15 crc kubenswrapper[4811]: I0216 21:08:15.340142 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:15 crc kubenswrapper[4811]: I0216 21:08:15.346838 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:16 crc kubenswrapper[4811]: I0216 21:08:16.043045 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5f4dbc997-gcnjt" Feb 16 21:08:16 crc kubenswrapper[4811]: I0216 21:08:16.110730 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-8vgph"] Feb 16 21:08:18 crc kubenswrapper[4811]: I0216 21:08:18.363821 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:08:18 crc kubenswrapper[4811]: I0216 21:08:18.364164 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:08:25 crc kubenswrapper[4811]: I0216 21:08:25.015900 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-dfn87" Feb 16 21:08:40 crc kubenswrapper[4811]: I0216 21:08:40.313462 4811 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5"] Feb 16 21:08:40 crc kubenswrapper[4811]: I0216 21:08:40.314856 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5" Feb 16 21:08:40 crc kubenswrapper[4811]: I0216 21:08:40.320758 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 21:08:40 crc kubenswrapper[4811]: I0216 21:08:40.342314 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5"] Feb 16 21:08:40 crc kubenswrapper[4811]: I0216 21:08:40.435617 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e179a5d8-431a-42cf-b2cc-848631cb784a-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5\" (UID: \"e179a5d8-431a-42cf-b2cc-848631cb784a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5" Feb 16 21:08:40 crc kubenswrapper[4811]: I0216 21:08:40.435672 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e179a5d8-431a-42cf-b2cc-848631cb784a-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5\" (UID: \"e179a5d8-431a-42cf-b2cc-848631cb784a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5" Feb 16 21:08:40 crc kubenswrapper[4811]: I0216 21:08:40.435716 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc6rf\" (UniqueName: \"kubernetes.io/projected/e179a5d8-431a-42cf-b2cc-848631cb784a-kube-api-access-lc6rf\") pod 
\"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5\" (UID: \"e179a5d8-431a-42cf-b2cc-848631cb784a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5" Feb 16 21:08:40 crc kubenswrapper[4811]: I0216 21:08:40.537342 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e179a5d8-431a-42cf-b2cc-848631cb784a-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5\" (UID: \"e179a5d8-431a-42cf-b2cc-848631cb784a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5" Feb 16 21:08:40 crc kubenswrapper[4811]: I0216 21:08:40.537418 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e179a5d8-431a-42cf-b2cc-848631cb784a-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5\" (UID: \"e179a5d8-431a-42cf-b2cc-848631cb784a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5" Feb 16 21:08:40 crc kubenswrapper[4811]: I0216 21:08:40.537451 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc6rf\" (UniqueName: \"kubernetes.io/projected/e179a5d8-431a-42cf-b2cc-848631cb784a-kube-api-access-lc6rf\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5\" (UID: \"e179a5d8-431a-42cf-b2cc-848631cb784a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5" Feb 16 21:08:40 crc kubenswrapper[4811]: I0216 21:08:40.537951 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e179a5d8-431a-42cf-b2cc-848631cb784a-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5\" (UID: \"e179a5d8-431a-42cf-b2cc-848631cb784a\") " 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5" Feb 16 21:08:40 crc kubenswrapper[4811]: I0216 21:08:40.538015 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e179a5d8-431a-42cf-b2cc-848631cb784a-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5\" (UID: \"e179a5d8-431a-42cf-b2cc-848631cb784a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5" Feb 16 21:08:40 crc kubenswrapper[4811]: I0216 21:08:40.560226 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc6rf\" (UniqueName: \"kubernetes.io/projected/e179a5d8-431a-42cf-b2cc-848631cb784a-kube-api-access-lc6rf\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5\" (UID: \"e179a5d8-431a-42cf-b2cc-848631cb784a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5" Feb 16 21:08:40 crc kubenswrapper[4811]: I0216 21:08:40.649826 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5" Feb 16 21:08:41 crc kubenswrapper[4811]: I0216 21:08:41.095705 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5"] Feb 16 21:08:41 crc kubenswrapper[4811]: I0216 21:08:41.175862 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-8vgph" podUID="a00560cb-dc2f-489d-a2b1-aaecee43f0d3" containerName="console" containerID="cri-o://a29bfb90b21d31192969bdf98f8a4de23df56ea3dff81ecbdbc127698ca566e2" gracePeriod=15 Feb 16 21:08:41 crc kubenswrapper[4811]: I0216 21:08:41.206883 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5" event={"ID":"e179a5d8-431a-42cf-b2cc-848631cb784a","Type":"ContainerStarted","Data":"113a5ffe4e8e653e9047c1c22436716c7968641293d5e97743da7b9739c91802"} Feb 16 21:08:41 crc kubenswrapper[4811]: I0216 21:08:41.483922 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-8vgph_a00560cb-dc2f-489d-a2b1-aaecee43f0d3/console/0.log" Feb 16 21:08:41 crc kubenswrapper[4811]: I0216 21:08:41.484522 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-8vgph" Feb 16 21:08:41 crc kubenswrapper[4811]: I0216 21:08:41.580778 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-console-oauth-config\") pod \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " Feb 16 21:08:41 crc kubenswrapper[4811]: I0216 21:08:41.580863 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-console-config\") pod \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " Feb 16 21:08:41 crc kubenswrapper[4811]: I0216 21:08:41.581003 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-console-serving-cert\") pod \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " Feb 16 21:08:41 crc kubenswrapper[4811]: I0216 21:08:41.581064 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-trusted-ca-bundle\") pod \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " Feb 16 21:08:41 crc kubenswrapper[4811]: I0216 21:08:41.581129 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-oauth-serving-cert\") pod \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " Feb 16 21:08:41 crc kubenswrapper[4811]: I0216 21:08:41.581184 4811 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-service-ca\") pod \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " Feb 16 21:08:41 crc kubenswrapper[4811]: I0216 21:08:41.581256 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6hqw\" (UniqueName: \"kubernetes.io/projected/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-kube-api-access-v6hqw\") pod \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\" (UID: \"a00560cb-dc2f-489d-a2b1-aaecee43f0d3\") " Feb 16 21:08:41 crc kubenswrapper[4811]: I0216 21:08:41.581988 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-console-config" (OuterVolumeSpecName: "console-config") pod "a00560cb-dc2f-489d-a2b1-aaecee43f0d3" (UID: "a00560cb-dc2f-489d-a2b1-aaecee43f0d3"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:08:41 crc kubenswrapper[4811]: I0216 21:08:41.582060 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a00560cb-dc2f-489d-a2b1-aaecee43f0d3" (UID: "a00560cb-dc2f-489d-a2b1-aaecee43f0d3"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:08:41 crc kubenswrapper[4811]: I0216 21:08:41.582072 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a00560cb-dc2f-489d-a2b1-aaecee43f0d3" (UID: "a00560cb-dc2f-489d-a2b1-aaecee43f0d3"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:08:41 crc kubenswrapper[4811]: I0216 21:08:41.582095 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-service-ca" (OuterVolumeSpecName: "service-ca") pod "a00560cb-dc2f-489d-a2b1-aaecee43f0d3" (UID: "a00560cb-dc2f-489d-a2b1-aaecee43f0d3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:08:41 crc kubenswrapper[4811]: I0216 21:08:41.590149 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a00560cb-dc2f-489d-a2b1-aaecee43f0d3" (UID: "a00560cb-dc2f-489d-a2b1-aaecee43f0d3"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:08:41 crc kubenswrapper[4811]: I0216 21:08:41.590190 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a00560cb-dc2f-489d-a2b1-aaecee43f0d3" (UID: "a00560cb-dc2f-489d-a2b1-aaecee43f0d3"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:08:41 crc kubenswrapper[4811]: I0216 21:08:41.590269 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-kube-api-access-v6hqw" (OuterVolumeSpecName: "kube-api-access-v6hqw") pod "a00560cb-dc2f-489d-a2b1-aaecee43f0d3" (UID: "a00560cb-dc2f-489d-a2b1-aaecee43f0d3"). InnerVolumeSpecName "kube-api-access-v6hqw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:08:41 crc kubenswrapper[4811]: I0216 21:08:41.682895 4811 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:08:41 crc kubenswrapper[4811]: I0216 21:08:41.682937 4811 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:08:41 crc kubenswrapper[4811]: I0216 21:08:41.682948 4811 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:08:41 crc kubenswrapper[4811]: I0216 21:08:41.682956 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:08:41 crc kubenswrapper[4811]: I0216 21:08:41.682966 4811 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:08:41 crc kubenswrapper[4811]: I0216 21:08:41.682975 4811 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:08:41 crc kubenswrapper[4811]: I0216 21:08:41.682982 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6hqw\" (UniqueName: \"kubernetes.io/projected/a00560cb-dc2f-489d-a2b1-aaecee43f0d3-kube-api-access-v6hqw\") on node \"crc\" DevicePath \"\"" Feb 16 21:08:42 crc 
kubenswrapper[4811]: I0216 21:08:42.218432 4811 generic.go:334] "Generic (PLEG): container finished" podID="e179a5d8-431a-42cf-b2cc-848631cb784a" containerID="ea1d1ed38119c3202e8afbdfb7600747a5cc4afde3eab637a6ee7013ce51e9e9" exitCode=0 Feb 16 21:08:42 crc kubenswrapper[4811]: I0216 21:08:42.218561 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5" event={"ID":"e179a5d8-431a-42cf-b2cc-848631cb784a","Type":"ContainerDied","Data":"ea1d1ed38119c3202e8afbdfb7600747a5cc4afde3eab637a6ee7013ce51e9e9"} Feb 16 21:08:42 crc kubenswrapper[4811]: I0216 21:08:42.223762 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-8vgph_a00560cb-dc2f-489d-a2b1-aaecee43f0d3/console/0.log" Feb 16 21:08:42 crc kubenswrapper[4811]: I0216 21:08:42.223831 4811 generic.go:334] "Generic (PLEG): container finished" podID="a00560cb-dc2f-489d-a2b1-aaecee43f0d3" containerID="a29bfb90b21d31192969bdf98f8a4de23df56ea3dff81ecbdbc127698ca566e2" exitCode=2 Feb 16 21:08:42 crc kubenswrapper[4811]: I0216 21:08:42.223869 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8vgph" event={"ID":"a00560cb-dc2f-489d-a2b1-aaecee43f0d3","Type":"ContainerDied","Data":"a29bfb90b21d31192969bdf98f8a4de23df56ea3dff81ecbdbc127698ca566e2"} Feb 16 21:08:42 crc kubenswrapper[4811]: I0216 21:08:42.223909 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8vgph" event={"ID":"a00560cb-dc2f-489d-a2b1-aaecee43f0d3","Type":"ContainerDied","Data":"27a51f336ab8049cddf086ec6d61a98e3a2f5499f0930dc58b03a52e58554aa9"} Feb 16 21:08:42 crc kubenswrapper[4811]: I0216 21:08:42.223939 4811 scope.go:117] "RemoveContainer" containerID="a29bfb90b21d31192969bdf98f8a4de23df56ea3dff81ecbdbc127698ca566e2" Feb 16 21:08:42 crc kubenswrapper[4811]: I0216 21:08:42.224101 4811 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-console/console-f9d7485db-8vgph" Feb 16 21:08:42 crc kubenswrapper[4811]: I0216 21:08:42.250695 4811 scope.go:117] "RemoveContainer" containerID="a29bfb90b21d31192969bdf98f8a4de23df56ea3dff81ecbdbc127698ca566e2" Feb 16 21:08:42 crc kubenswrapper[4811]: E0216 21:08:42.251749 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a29bfb90b21d31192969bdf98f8a4de23df56ea3dff81ecbdbc127698ca566e2\": container with ID starting with a29bfb90b21d31192969bdf98f8a4de23df56ea3dff81ecbdbc127698ca566e2 not found: ID does not exist" containerID="a29bfb90b21d31192969bdf98f8a4de23df56ea3dff81ecbdbc127698ca566e2" Feb 16 21:08:42 crc kubenswrapper[4811]: I0216 21:08:42.251940 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a29bfb90b21d31192969bdf98f8a4de23df56ea3dff81ecbdbc127698ca566e2"} err="failed to get container status \"a29bfb90b21d31192969bdf98f8a4de23df56ea3dff81ecbdbc127698ca566e2\": rpc error: code = NotFound desc = could not find container \"a29bfb90b21d31192969bdf98f8a4de23df56ea3dff81ecbdbc127698ca566e2\": container with ID starting with a29bfb90b21d31192969bdf98f8a4de23df56ea3dff81ecbdbc127698ca566e2 not found: ID does not exist" Feb 16 21:08:42 crc kubenswrapper[4811]: I0216 21:08:42.282564 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-8vgph"] Feb 16 21:08:42 crc kubenswrapper[4811]: I0216 21:08:42.287891 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-8vgph"] Feb 16 21:08:42 crc kubenswrapper[4811]: I0216 21:08:42.717719 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a00560cb-dc2f-489d-a2b1-aaecee43f0d3" path="/var/lib/kubelet/pods/a00560cb-dc2f-489d-a2b1-aaecee43f0d3/volumes" Feb 16 21:08:44 crc kubenswrapper[4811]: I0216 21:08:44.242658 4811 generic.go:334] "Generic 
(PLEG): container finished" podID="e179a5d8-431a-42cf-b2cc-848631cb784a" containerID="43f039ffd1f193923b8e5d10b9d2078ee5805f13933e14a68dc1d023763bee85" exitCode=0 Feb 16 21:08:44 crc kubenswrapper[4811]: I0216 21:08:44.242789 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5" event={"ID":"e179a5d8-431a-42cf-b2cc-848631cb784a","Type":"ContainerDied","Data":"43f039ffd1f193923b8e5d10b9d2078ee5805f13933e14a68dc1d023763bee85"} Feb 16 21:08:45 crc kubenswrapper[4811]: I0216 21:08:45.256651 4811 generic.go:334] "Generic (PLEG): container finished" podID="e179a5d8-431a-42cf-b2cc-848631cb784a" containerID="a496dec4de56147efab83171fa3233d4769ae13d38f0407c9d20c1a3af45514e" exitCode=0 Feb 16 21:08:45 crc kubenswrapper[4811]: I0216 21:08:45.256737 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5" event={"ID":"e179a5d8-431a-42cf-b2cc-848631cb784a","Type":"ContainerDied","Data":"a496dec4de56147efab83171fa3233d4769ae13d38f0407c9d20c1a3af45514e"} Feb 16 21:08:46 crc kubenswrapper[4811]: I0216 21:08:46.575080 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5" Feb 16 21:08:46 crc kubenswrapper[4811]: I0216 21:08:46.657822 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e179a5d8-431a-42cf-b2cc-848631cb784a-util\") pod \"e179a5d8-431a-42cf-b2cc-848631cb784a\" (UID: \"e179a5d8-431a-42cf-b2cc-848631cb784a\") " Feb 16 21:08:46 crc kubenswrapper[4811]: I0216 21:08:46.657909 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e179a5d8-431a-42cf-b2cc-848631cb784a-bundle\") pod \"e179a5d8-431a-42cf-b2cc-848631cb784a\" (UID: \"e179a5d8-431a-42cf-b2cc-848631cb784a\") " Feb 16 21:08:46 crc kubenswrapper[4811]: I0216 21:08:46.658002 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc6rf\" (UniqueName: \"kubernetes.io/projected/e179a5d8-431a-42cf-b2cc-848631cb784a-kube-api-access-lc6rf\") pod \"e179a5d8-431a-42cf-b2cc-848631cb784a\" (UID: \"e179a5d8-431a-42cf-b2cc-848631cb784a\") " Feb 16 21:08:46 crc kubenswrapper[4811]: I0216 21:08:46.660319 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e179a5d8-431a-42cf-b2cc-848631cb784a-bundle" (OuterVolumeSpecName: "bundle") pod "e179a5d8-431a-42cf-b2cc-848631cb784a" (UID: "e179a5d8-431a-42cf-b2cc-848631cb784a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:08:46 crc kubenswrapper[4811]: I0216 21:08:46.668511 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e179a5d8-431a-42cf-b2cc-848631cb784a-kube-api-access-lc6rf" (OuterVolumeSpecName: "kube-api-access-lc6rf") pod "e179a5d8-431a-42cf-b2cc-848631cb784a" (UID: "e179a5d8-431a-42cf-b2cc-848631cb784a"). InnerVolumeSpecName "kube-api-access-lc6rf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:08:46 crc kubenswrapper[4811]: I0216 21:08:46.670035 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e179a5d8-431a-42cf-b2cc-848631cb784a-util" (OuterVolumeSpecName: "util") pod "e179a5d8-431a-42cf-b2cc-848631cb784a" (UID: "e179a5d8-431a-42cf-b2cc-848631cb784a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:08:46 crc kubenswrapper[4811]: I0216 21:08:46.760524 4811 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e179a5d8-431a-42cf-b2cc-848631cb784a-util\") on node \"crc\" DevicePath \"\"" Feb 16 21:08:46 crc kubenswrapper[4811]: I0216 21:08:46.760559 4811 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e179a5d8-431a-42cf-b2cc-848631cb784a-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:08:46 crc kubenswrapper[4811]: I0216 21:08:46.760572 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lc6rf\" (UniqueName: \"kubernetes.io/projected/e179a5d8-431a-42cf-b2cc-848631cb784a-kube-api-access-lc6rf\") on node \"crc\" DevicePath \"\"" Feb 16 21:08:47 crc kubenswrapper[4811]: I0216 21:08:47.273042 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5" event={"ID":"e179a5d8-431a-42cf-b2cc-848631cb784a","Type":"ContainerDied","Data":"113a5ffe4e8e653e9047c1c22436716c7968641293d5e97743da7b9739c91802"} Feb 16 21:08:47 crc kubenswrapper[4811]: I0216 21:08:47.273340 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="113a5ffe4e8e653e9047c1c22436716c7968641293d5e97743da7b9739c91802" Feb 16 21:08:47 crc kubenswrapper[4811]: I0216 21:08:47.273113 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5" Feb 16 21:08:48 crc kubenswrapper[4811]: I0216 21:08:48.364467 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:08:48 crc kubenswrapper[4811]: I0216 21:08:48.364565 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.382360 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-84454db595-l2tp8"] Feb 16 21:08:55 crc kubenswrapper[4811]: E0216 21:08:55.383166 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e179a5d8-431a-42cf-b2cc-848631cb784a" containerName="extract" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.383181 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e179a5d8-431a-42cf-b2cc-848631cb784a" containerName="extract" Feb 16 21:08:55 crc kubenswrapper[4811]: E0216 21:08:55.383209 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e179a5d8-431a-42cf-b2cc-848631cb784a" containerName="pull" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.383217 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e179a5d8-431a-42cf-b2cc-848631cb784a" containerName="pull" Feb 16 21:08:55 crc kubenswrapper[4811]: E0216 21:08:55.383227 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a00560cb-dc2f-489d-a2b1-aaecee43f0d3" 
containerName="console" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.383233 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="a00560cb-dc2f-489d-a2b1-aaecee43f0d3" containerName="console" Feb 16 21:08:55 crc kubenswrapper[4811]: E0216 21:08:55.383250 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e179a5d8-431a-42cf-b2cc-848631cb784a" containerName="util" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.383259 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e179a5d8-431a-42cf-b2cc-848631cb784a" containerName="util" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.383390 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="a00560cb-dc2f-489d-a2b1-aaecee43f0d3" containerName="console" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.383404 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e179a5d8-431a-42cf-b2cc-848631cb784a" containerName="extract" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.383911 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-84454db595-l2tp8" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.386077 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.386097 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.386181 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.386252 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.387705 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-l5lcs" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.400557 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-84454db595-l2tp8"] Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.477364 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4d8c2078-639b-41a8-9ac9-58e8b6315d05-webhook-cert\") pod \"metallb-operator-controller-manager-84454db595-l2tp8\" (UID: \"4d8c2078-639b-41a8-9ac9-58e8b6315d05\") " pod="metallb-system/metallb-operator-controller-manager-84454db595-l2tp8" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.477419 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4d8c2078-639b-41a8-9ac9-58e8b6315d05-apiservice-cert\") pod \"metallb-operator-controller-manager-84454db595-l2tp8\" (UID: 
\"4d8c2078-639b-41a8-9ac9-58e8b6315d05\") " pod="metallb-system/metallb-operator-controller-manager-84454db595-l2tp8" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.477457 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf24q\" (UniqueName: \"kubernetes.io/projected/4d8c2078-639b-41a8-9ac9-58e8b6315d05-kube-api-access-rf24q\") pod \"metallb-operator-controller-manager-84454db595-l2tp8\" (UID: \"4d8c2078-639b-41a8-9ac9-58e8b6315d05\") " pod="metallb-system/metallb-operator-controller-manager-84454db595-l2tp8" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.578515 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4d8c2078-639b-41a8-9ac9-58e8b6315d05-webhook-cert\") pod \"metallb-operator-controller-manager-84454db595-l2tp8\" (UID: \"4d8c2078-639b-41a8-9ac9-58e8b6315d05\") " pod="metallb-system/metallb-operator-controller-manager-84454db595-l2tp8" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.578565 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4d8c2078-639b-41a8-9ac9-58e8b6315d05-apiservice-cert\") pod \"metallb-operator-controller-manager-84454db595-l2tp8\" (UID: \"4d8c2078-639b-41a8-9ac9-58e8b6315d05\") " pod="metallb-system/metallb-operator-controller-manager-84454db595-l2tp8" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.578611 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf24q\" (UniqueName: \"kubernetes.io/projected/4d8c2078-639b-41a8-9ac9-58e8b6315d05-kube-api-access-rf24q\") pod \"metallb-operator-controller-manager-84454db595-l2tp8\" (UID: \"4d8c2078-639b-41a8-9ac9-58e8b6315d05\") " pod="metallb-system/metallb-operator-controller-manager-84454db595-l2tp8" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.584799 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4d8c2078-639b-41a8-9ac9-58e8b6315d05-webhook-cert\") pod \"metallb-operator-controller-manager-84454db595-l2tp8\" (UID: \"4d8c2078-639b-41a8-9ac9-58e8b6315d05\") " pod="metallb-system/metallb-operator-controller-manager-84454db595-l2tp8" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.590915 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4d8c2078-639b-41a8-9ac9-58e8b6315d05-apiservice-cert\") pod \"metallb-operator-controller-manager-84454db595-l2tp8\" (UID: \"4d8c2078-639b-41a8-9ac9-58e8b6315d05\") " pod="metallb-system/metallb-operator-controller-manager-84454db595-l2tp8" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.599115 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf24q\" (UniqueName: \"kubernetes.io/projected/4d8c2078-639b-41a8-9ac9-58e8b6315d05-kube-api-access-rf24q\") pod \"metallb-operator-controller-manager-84454db595-l2tp8\" (UID: \"4d8c2078-639b-41a8-9ac9-58e8b6315d05\") " pod="metallb-system/metallb-operator-controller-manager-84454db595-l2tp8" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.700841 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-84454db595-l2tp8" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.741000 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7899c768f-d4x8l"] Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.741919 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7899c768f-d4x8l" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.743824 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-skpw7" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.744099 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.744694 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.757986 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7899c768f-d4x8l"] Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.781381 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e42eebc8-e49d-4af3-9ab6-c2c2ca258e81-apiservice-cert\") pod \"metallb-operator-webhook-server-7899c768f-d4x8l\" (UID: \"e42eebc8-e49d-4af3-9ab6-c2c2ca258e81\") " pod="metallb-system/metallb-operator-webhook-server-7899c768f-d4x8l" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.781445 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzs94\" (UniqueName: \"kubernetes.io/projected/e42eebc8-e49d-4af3-9ab6-c2c2ca258e81-kube-api-access-pzs94\") pod \"metallb-operator-webhook-server-7899c768f-d4x8l\" (UID: \"e42eebc8-e49d-4af3-9ab6-c2c2ca258e81\") " pod="metallb-system/metallb-operator-webhook-server-7899c768f-d4x8l" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.781543 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/e42eebc8-e49d-4af3-9ab6-c2c2ca258e81-webhook-cert\") pod \"metallb-operator-webhook-server-7899c768f-d4x8l\" (UID: \"e42eebc8-e49d-4af3-9ab6-c2c2ca258e81\") " pod="metallb-system/metallb-operator-webhook-server-7899c768f-d4x8l" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.882358 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e42eebc8-e49d-4af3-9ab6-c2c2ca258e81-apiservice-cert\") pod \"metallb-operator-webhook-server-7899c768f-d4x8l\" (UID: \"e42eebc8-e49d-4af3-9ab6-c2c2ca258e81\") " pod="metallb-system/metallb-operator-webhook-server-7899c768f-d4x8l" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.882753 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzs94\" (UniqueName: \"kubernetes.io/projected/e42eebc8-e49d-4af3-9ab6-c2c2ca258e81-kube-api-access-pzs94\") pod \"metallb-operator-webhook-server-7899c768f-d4x8l\" (UID: \"e42eebc8-e49d-4af3-9ab6-c2c2ca258e81\") " pod="metallb-system/metallb-operator-webhook-server-7899c768f-d4x8l" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.882813 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e42eebc8-e49d-4af3-9ab6-c2c2ca258e81-webhook-cert\") pod \"metallb-operator-webhook-server-7899c768f-d4x8l\" (UID: \"e42eebc8-e49d-4af3-9ab6-c2c2ca258e81\") " pod="metallb-system/metallb-operator-webhook-server-7899c768f-d4x8l" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.889633 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e42eebc8-e49d-4af3-9ab6-c2c2ca258e81-apiservice-cert\") pod \"metallb-operator-webhook-server-7899c768f-d4x8l\" (UID: \"e42eebc8-e49d-4af3-9ab6-c2c2ca258e81\") " pod="metallb-system/metallb-operator-webhook-server-7899c768f-d4x8l" Feb 16 21:08:55 crc 
kubenswrapper[4811]: I0216 21:08:55.892664 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e42eebc8-e49d-4af3-9ab6-c2c2ca258e81-webhook-cert\") pod \"metallb-operator-webhook-server-7899c768f-d4x8l\" (UID: \"e42eebc8-e49d-4af3-9ab6-c2c2ca258e81\") " pod="metallb-system/metallb-operator-webhook-server-7899c768f-d4x8l" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.906945 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzs94\" (UniqueName: \"kubernetes.io/projected/e42eebc8-e49d-4af3-9ab6-c2c2ca258e81-kube-api-access-pzs94\") pod \"metallb-operator-webhook-server-7899c768f-d4x8l\" (UID: \"e42eebc8-e49d-4af3-9ab6-c2c2ca258e81\") " pod="metallb-system/metallb-operator-webhook-server-7899c768f-d4x8l" Feb 16 21:08:55 crc kubenswrapper[4811]: I0216 21:08:55.964121 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-84454db595-l2tp8"] Feb 16 21:08:55 crc kubenswrapper[4811]: W0216 21:08:55.970431 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d8c2078_639b_41a8_9ac9_58e8b6315d05.slice/crio-5d62c6a2865312711c47db0011f9127021fd97fa699fdcd6dc0a0e73502497c2 WatchSource:0}: Error finding container 5d62c6a2865312711c47db0011f9127021fd97fa699fdcd6dc0a0e73502497c2: Status 404 returned error can't find the container with id 5d62c6a2865312711c47db0011f9127021fd97fa699fdcd6dc0a0e73502497c2 Feb 16 21:08:56 crc kubenswrapper[4811]: I0216 21:08:56.077612 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7899c768f-d4x8l" Feb 16 21:08:56 crc kubenswrapper[4811]: I0216 21:08:56.352338 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-84454db595-l2tp8" event={"ID":"4d8c2078-639b-41a8-9ac9-58e8b6315d05","Type":"ContainerStarted","Data":"5d62c6a2865312711c47db0011f9127021fd97fa699fdcd6dc0a0e73502497c2"} Feb 16 21:08:56 crc kubenswrapper[4811]: I0216 21:08:56.372822 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7899c768f-d4x8l"] Feb 16 21:08:56 crc kubenswrapper[4811]: W0216 21:08:56.381083 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode42eebc8_e49d_4af3_9ab6_c2c2ca258e81.slice/crio-8ff4cc130f3b961c50109d369e64498fe12f5d58f779f4558c78dc18bf2713d4 WatchSource:0}: Error finding container 8ff4cc130f3b961c50109d369e64498fe12f5d58f779f4558c78dc18bf2713d4: Status 404 returned error can't find the container with id 8ff4cc130f3b961c50109d369e64498fe12f5d58f779f4558c78dc18bf2713d4 Feb 16 21:08:57 crc kubenswrapper[4811]: I0216 21:08:57.361001 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7899c768f-d4x8l" event={"ID":"e42eebc8-e49d-4af3-9ab6-c2c2ca258e81","Type":"ContainerStarted","Data":"8ff4cc130f3b961c50109d369e64498fe12f5d58f779f4558c78dc18bf2713d4"} Feb 16 21:08:59 crc kubenswrapper[4811]: I0216 21:08:59.868426 4811 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 21:09:01 crc kubenswrapper[4811]: I0216 21:09:01.391743 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7899c768f-d4x8l" 
event={"ID":"e42eebc8-e49d-4af3-9ab6-c2c2ca258e81","Type":"ContainerStarted","Data":"ded8e84d2a379d562fb1cb244e13a25ae58c5cc6276633953858be595e419e1c"} Feb 16 21:09:01 crc kubenswrapper[4811]: I0216 21:09:01.392126 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7899c768f-d4x8l" Feb 16 21:09:01 crc kubenswrapper[4811]: I0216 21:09:01.412152 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7899c768f-d4x8l" podStartSLOduration=2.050833932 podStartE2EDuration="6.412133591s" podCreationTimestamp="2026-02-16 21:08:55 +0000 UTC" firstStartedPulling="2026-02-16 21:08:56.385621308 +0000 UTC m=+754.314917246" lastFinishedPulling="2026-02-16 21:09:00.746920967 +0000 UTC m=+758.676216905" observedRunningTime="2026-02-16 21:09:01.411868294 +0000 UTC m=+759.341164232" watchObservedRunningTime="2026-02-16 21:09:01.412133591 +0000 UTC m=+759.341429529" Feb 16 21:09:03 crc kubenswrapper[4811]: I0216 21:09:03.406239 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-84454db595-l2tp8" event={"ID":"4d8c2078-639b-41a8-9ac9-58e8b6315d05","Type":"ContainerStarted","Data":"844bbd9293045979a7fd0841b429b9f4cf133091c0667019ad6552fa547e7963"} Feb 16 21:09:03 crc kubenswrapper[4811]: I0216 21:09:03.406551 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-84454db595-l2tp8" Feb 16 21:09:03 crc kubenswrapper[4811]: I0216 21:09:03.432130 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-84454db595-l2tp8" podStartSLOduration=1.4613480540000001 podStartE2EDuration="8.432105543s" podCreationTimestamp="2026-02-16 21:08:55 +0000 UTC" firstStartedPulling="2026-02-16 21:08:55.97370107 +0000 UTC m=+753.902997008" lastFinishedPulling="2026-02-16 
21:09:02.944458519 +0000 UTC m=+760.873754497" observedRunningTime="2026-02-16 21:09:03.42688943 +0000 UTC m=+761.356185398" watchObservedRunningTime="2026-02-16 21:09:03.432105543 +0000 UTC m=+761.361401501" Feb 16 21:09:16 crc kubenswrapper[4811]: I0216 21:09:16.082678 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7899c768f-d4x8l" Feb 16 21:09:18 crc kubenswrapper[4811]: I0216 21:09:18.363896 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:09:18 crc kubenswrapper[4811]: I0216 21:09:18.364232 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:09:18 crc kubenswrapper[4811]: I0216 21:09:18.364272 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" Feb 16 21:09:18 crc kubenswrapper[4811]: I0216 21:09:18.364817 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"15b3c1409544ddca121710199668aff9f31624230e68744253cb5ac3f7bbbf00"} pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:09:18 crc kubenswrapper[4811]: I0216 21:09:18.364865 4811 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" containerID="cri-o://15b3c1409544ddca121710199668aff9f31624230e68744253cb5ac3f7bbbf00" gracePeriod=600 Feb 16 21:09:18 crc kubenswrapper[4811]: I0216 21:09:18.515892 4811 generic.go:334] "Generic (PLEG): container finished" podID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerID="15b3c1409544ddca121710199668aff9f31624230e68744253cb5ac3f7bbbf00" exitCode=0 Feb 16 21:09:18 crc kubenswrapper[4811]: I0216 21:09:18.515941 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerDied","Data":"15b3c1409544ddca121710199668aff9f31624230e68744253cb5ac3f7bbbf00"} Feb 16 21:09:18 crc kubenswrapper[4811]: I0216 21:09:18.515992 4811 scope.go:117] "RemoveContainer" containerID="1f0a256388bab5ae3a75d81440eaebf36f0fd6fc190dadf86a4b8d117b1e9e11" Feb 16 21:09:19 crc kubenswrapper[4811]: I0216 21:09:19.527830 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerStarted","Data":"aec5c764f743f1a4d04f239fd31aa099d13a84893ba733482b70a62ad8b5e0d2"} Feb 16 21:09:35 crc kubenswrapper[4811]: I0216 21:09:35.702892 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-84454db595-l2tp8" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.440721 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-rwqbc"] Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.444211 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.447857 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-qnfcr" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.448132 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.448482 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.459646 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-5ndl9"] Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.460685 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5ndl9" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.464076 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.480487 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-5ndl9"] Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.520788 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-qvj4b"] Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.521757 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-qvj4b" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.526547 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-p4d6k" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.526721 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.526846 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.526966 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.531969 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dc88841d-ab30-446e-a4f1-f7e37902c90d-metrics-certs\") pod \"frr-k8s-rwqbc\" (UID: \"dc88841d-ab30-446e-a4f1-f7e37902c90d\") " pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.532032 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8w7t\" (UniqueName: \"kubernetes.io/projected/dc88841d-ab30-446e-a4f1-f7e37902c90d-kube-api-access-m8w7t\") pod \"frr-k8s-rwqbc\" (UID: \"dc88841d-ab30-446e-a4f1-f7e37902c90d\") " pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.532076 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/dc88841d-ab30-446e-a4f1-f7e37902c90d-metrics\") pod \"frr-k8s-rwqbc\" (UID: \"dc88841d-ab30-446e-a4f1-f7e37902c90d\") " pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.532153 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b0926b4d-4fed-4543-abd8-1e1cc65983f6-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-5ndl9\" (UID: \"b0926b4d-4fed-4543-abd8-1e1cc65983f6\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5ndl9" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.532182 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/dfcb7d78-0504-47c4-a5bc-05f382feefaa-metallb-excludel2\") pod \"speaker-qvj4b\" (UID: \"dfcb7d78-0504-47c4-a5bc-05f382feefaa\") " pod="metallb-system/speaker-qvj4b" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.532237 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dfcb7d78-0504-47c4-a5bc-05f382feefaa-memberlist\") pod \"speaker-qvj4b\" (UID: \"dfcb7d78-0504-47c4-a5bc-05f382feefaa\") " pod="metallb-system/speaker-qvj4b" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.532355 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfcb7d78-0504-47c4-a5bc-05f382feefaa-metrics-certs\") pod \"speaker-qvj4b\" (UID: \"dfcb7d78-0504-47c4-a5bc-05f382feefaa\") " pod="metallb-system/speaker-qvj4b" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.532408 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj8c2\" (UniqueName: \"kubernetes.io/projected/dfcb7d78-0504-47c4-a5bc-05f382feefaa-kube-api-access-bj8c2\") pod \"speaker-qvj4b\" (UID: \"dfcb7d78-0504-47c4-a5bc-05f382feefaa\") " pod="metallb-system/speaker-qvj4b" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.532443 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/dc88841d-ab30-446e-a4f1-f7e37902c90d-frr-sockets\") pod \"frr-k8s-rwqbc\" (UID: \"dc88841d-ab30-446e-a4f1-f7e37902c90d\") " pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.532471 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/dc88841d-ab30-446e-a4f1-f7e37902c90d-frr-startup\") pod \"frr-k8s-rwqbc\" (UID: \"dc88841d-ab30-446e-a4f1-f7e37902c90d\") " pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.532488 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nvm5\" (UniqueName: \"kubernetes.io/projected/b0926b4d-4fed-4543-abd8-1e1cc65983f6-kube-api-access-6nvm5\") pod \"frr-k8s-webhook-server-78b44bf5bb-5ndl9\" (UID: \"b0926b4d-4fed-4543-abd8-1e1cc65983f6\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5ndl9" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.532518 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/dc88841d-ab30-446e-a4f1-f7e37902c90d-frr-conf\") pod \"frr-k8s-rwqbc\" (UID: \"dc88841d-ab30-446e-a4f1-f7e37902c90d\") " pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.532538 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/dc88841d-ab30-446e-a4f1-f7e37902c90d-reloader\") pod \"frr-k8s-rwqbc\" (UID: \"dc88841d-ab30-446e-a4f1-f7e37902c90d\") " pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.540578 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-pnpzd"] Feb 16 21:09:36 crc 
kubenswrapper[4811]: I0216 21:09:36.542165 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-pnpzd" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.545447 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.557996 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-pnpzd"] Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.632900 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/dc88841d-ab30-446e-a4f1-f7e37902c90d-metrics\") pod \"frr-k8s-rwqbc\" (UID: \"dc88841d-ab30-446e-a4f1-f7e37902c90d\") " pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.632943 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa405c88-9fb5-47e9-b4e6-70813ede9574-cert\") pod \"controller-69bbfbf88f-pnpzd\" (UID: \"fa405c88-9fb5-47e9-b4e6-70813ede9574\") " pod="metallb-system/controller-69bbfbf88f-pnpzd" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.632965 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b0926b4d-4fed-4543-abd8-1e1cc65983f6-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-5ndl9\" (UID: \"b0926b4d-4fed-4543-abd8-1e1cc65983f6\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5ndl9" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.632987 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/dfcb7d78-0504-47c4-a5bc-05f382feefaa-metallb-excludel2\") pod \"speaker-qvj4b\" (UID: \"dfcb7d78-0504-47c4-a5bc-05f382feefaa\") " 
pod="metallb-system/speaker-qvj4b" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.633012 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dfcb7d78-0504-47c4-a5bc-05f382feefaa-memberlist\") pod \"speaker-qvj4b\" (UID: \"dfcb7d78-0504-47c4-a5bc-05f382feefaa\") " pod="metallb-system/speaker-qvj4b" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.633035 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfcb7d78-0504-47c4-a5bc-05f382feefaa-metrics-certs\") pod \"speaker-qvj4b\" (UID: \"dfcb7d78-0504-47c4-a5bc-05f382feefaa\") " pod="metallb-system/speaker-qvj4b" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.633054 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj8c2\" (UniqueName: \"kubernetes.io/projected/dfcb7d78-0504-47c4-a5bc-05f382feefaa-kube-api-access-bj8c2\") pod \"speaker-qvj4b\" (UID: \"dfcb7d78-0504-47c4-a5bc-05f382feefaa\") " pod="metallb-system/speaker-qvj4b" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.633075 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/dc88841d-ab30-446e-a4f1-f7e37902c90d-frr-sockets\") pod \"frr-k8s-rwqbc\" (UID: \"dc88841d-ab30-446e-a4f1-f7e37902c90d\") " pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.633091 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/dc88841d-ab30-446e-a4f1-f7e37902c90d-frr-startup\") pod \"frr-k8s-rwqbc\" (UID: \"dc88841d-ab30-446e-a4f1-f7e37902c90d\") " pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.633108 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6nvm5\" (UniqueName: \"kubernetes.io/projected/b0926b4d-4fed-4543-abd8-1e1cc65983f6-kube-api-access-6nvm5\") pod \"frr-k8s-webhook-server-78b44bf5bb-5ndl9\" (UID: \"b0926b4d-4fed-4543-abd8-1e1cc65983f6\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5ndl9" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.633136 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/dc88841d-ab30-446e-a4f1-f7e37902c90d-frr-conf\") pod \"frr-k8s-rwqbc\" (UID: \"dc88841d-ab30-446e-a4f1-f7e37902c90d\") " pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.633150 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/dc88841d-ab30-446e-a4f1-f7e37902c90d-reloader\") pod \"frr-k8s-rwqbc\" (UID: \"dc88841d-ab30-446e-a4f1-f7e37902c90d\") " pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.633176 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msztj\" (UniqueName: \"kubernetes.io/projected/fa405c88-9fb5-47e9-b4e6-70813ede9574-kube-api-access-msztj\") pod \"controller-69bbfbf88f-pnpzd\" (UID: \"fa405c88-9fb5-47e9-b4e6-70813ede9574\") " pod="metallb-system/controller-69bbfbf88f-pnpzd" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.633213 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa405c88-9fb5-47e9-b4e6-70813ede9574-metrics-certs\") pod \"controller-69bbfbf88f-pnpzd\" (UID: \"fa405c88-9fb5-47e9-b4e6-70813ede9574\") " pod="metallb-system/controller-69bbfbf88f-pnpzd" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.633231 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/dc88841d-ab30-446e-a4f1-f7e37902c90d-metrics-certs\") pod \"frr-k8s-rwqbc\" (UID: \"dc88841d-ab30-446e-a4f1-f7e37902c90d\") " pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.633250 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8w7t\" (UniqueName: \"kubernetes.io/projected/dc88841d-ab30-446e-a4f1-f7e37902c90d-kube-api-access-m8w7t\") pod \"frr-k8s-rwqbc\" (UID: \"dc88841d-ab30-446e-a4f1-f7e37902c90d\") " pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:36 crc kubenswrapper[4811]: E0216 21:09:36.633171 4811 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 16 21:09:36 crc kubenswrapper[4811]: E0216 21:09:36.633538 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0926b4d-4fed-4543-abd8-1e1cc65983f6-cert podName:b0926b4d-4fed-4543-abd8-1e1cc65983f6 nodeName:}" failed. No retries permitted until 2026-02-16 21:09:37.133521554 +0000 UTC m=+795.062817492 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b0926b4d-4fed-4543-abd8-1e1cc65983f6-cert") pod "frr-k8s-webhook-server-78b44bf5bb-5ndl9" (UID: "b0926b4d-4fed-4543-abd8-1e1cc65983f6") : secret "frr-k8s-webhook-server-cert" not found Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.633782 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/dc88841d-ab30-446e-a4f1-f7e37902c90d-reloader\") pod \"frr-k8s-rwqbc\" (UID: \"dc88841d-ab30-446e-a4f1-f7e37902c90d\") " pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.633487 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/dc88841d-ab30-446e-a4f1-f7e37902c90d-frr-conf\") pod \"frr-k8s-rwqbc\" (UID: \"dc88841d-ab30-446e-a4f1-f7e37902c90d\") " pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:36 crc kubenswrapper[4811]: E0216 21:09:36.633172 4811 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 16 21:09:36 crc kubenswrapper[4811]: E0216 21:09:36.633849 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfcb7d78-0504-47c4-a5bc-05f382feefaa-memberlist podName:dfcb7d78-0504-47c4-a5bc-05f382feefaa nodeName:}" failed. No retries permitted until 2026-02-16 21:09:37.133841642 +0000 UTC m=+795.063137580 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/dfcb7d78-0504-47c4-a5bc-05f382feefaa-memberlist") pod "speaker-qvj4b" (UID: "dfcb7d78-0504-47c4-a5bc-05f382feefaa") : secret "metallb-memberlist" not found Feb 16 21:09:36 crc kubenswrapper[4811]: E0216 21:09:36.633477 4811 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.633859 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/dc88841d-ab30-446e-a4f1-f7e37902c90d-frr-sockets\") pod \"frr-k8s-rwqbc\" (UID: \"dc88841d-ab30-446e-a4f1-f7e37902c90d\") " pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:36 crc kubenswrapper[4811]: E0216 21:09:36.633870 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfcb7d78-0504-47c4-a5bc-05f382feefaa-metrics-certs podName:dfcb7d78-0504-47c4-a5bc-05f382feefaa nodeName:}" failed. No retries permitted until 2026-02-16 21:09:37.133864052 +0000 UTC m=+795.063160000 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dfcb7d78-0504-47c4-a5bc-05f382feefaa-metrics-certs") pod "speaker-qvj4b" (UID: "dfcb7d78-0504-47c4-a5bc-05f382feefaa") : secret "speaker-certs-secret" not found Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.633990 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/dc88841d-ab30-446e-a4f1-f7e37902c90d-metrics\") pod \"frr-k8s-rwqbc\" (UID: \"dc88841d-ab30-446e-a4f1-f7e37902c90d\") " pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.634139 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/dfcb7d78-0504-47c4-a5bc-05f382feefaa-metallb-excludel2\") pod \"speaker-qvj4b\" (UID: \"dfcb7d78-0504-47c4-a5bc-05f382feefaa\") " pod="metallb-system/speaker-qvj4b" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.634166 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/dc88841d-ab30-446e-a4f1-f7e37902c90d-frr-startup\") pod \"frr-k8s-rwqbc\" (UID: \"dc88841d-ab30-446e-a4f1-f7e37902c90d\") " pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.641830 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dc88841d-ab30-446e-a4f1-f7e37902c90d-metrics-certs\") pod \"frr-k8s-rwqbc\" (UID: \"dc88841d-ab30-446e-a4f1-f7e37902c90d\") " pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.650898 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8w7t\" (UniqueName: \"kubernetes.io/projected/dc88841d-ab30-446e-a4f1-f7e37902c90d-kube-api-access-m8w7t\") pod \"frr-k8s-rwqbc\" (UID: \"dc88841d-ab30-446e-a4f1-f7e37902c90d\") " 
pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.653808 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj8c2\" (UniqueName: \"kubernetes.io/projected/dfcb7d78-0504-47c4-a5bc-05f382feefaa-kube-api-access-bj8c2\") pod \"speaker-qvj4b\" (UID: \"dfcb7d78-0504-47c4-a5bc-05f382feefaa\") " pod="metallb-system/speaker-qvj4b" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.663347 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nvm5\" (UniqueName: \"kubernetes.io/projected/b0926b4d-4fed-4543-abd8-1e1cc65983f6-kube-api-access-6nvm5\") pod \"frr-k8s-webhook-server-78b44bf5bb-5ndl9\" (UID: \"b0926b4d-4fed-4543-abd8-1e1cc65983f6\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5ndl9" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.733957 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa405c88-9fb5-47e9-b4e6-70813ede9574-cert\") pod \"controller-69bbfbf88f-pnpzd\" (UID: \"fa405c88-9fb5-47e9-b4e6-70813ede9574\") " pod="metallb-system/controller-69bbfbf88f-pnpzd" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.734123 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msztj\" (UniqueName: \"kubernetes.io/projected/fa405c88-9fb5-47e9-b4e6-70813ede9574-kube-api-access-msztj\") pod \"controller-69bbfbf88f-pnpzd\" (UID: \"fa405c88-9fb5-47e9-b4e6-70813ede9574\") " pod="metallb-system/controller-69bbfbf88f-pnpzd" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.734159 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa405c88-9fb5-47e9-b4e6-70813ede9574-metrics-certs\") pod \"controller-69bbfbf88f-pnpzd\" (UID: \"fa405c88-9fb5-47e9-b4e6-70813ede9574\") " pod="metallb-system/controller-69bbfbf88f-pnpzd" Feb 16 
21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.736973 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.737155 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa405c88-9fb5-47e9-b4e6-70813ede9574-metrics-certs\") pod \"controller-69bbfbf88f-pnpzd\" (UID: \"fa405c88-9fb5-47e9-b4e6-70813ede9574\") " pod="metallb-system/controller-69bbfbf88f-pnpzd" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.748185 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa405c88-9fb5-47e9-b4e6-70813ede9574-cert\") pod \"controller-69bbfbf88f-pnpzd\" (UID: \"fa405c88-9fb5-47e9-b4e6-70813ede9574\") " pod="metallb-system/controller-69bbfbf88f-pnpzd" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.750396 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msztj\" (UniqueName: \"kubernetes.io/projected/fa405c88-9fb5-47e9-b4e6-70813ede9574-kube-api-access-msztj\") pod \"controller-69bbfbf88f-pnpzd\" (UID: \"fa405c88-9fb5-47e9-b4e6-70813ede9574\") " pod="metallb-system/controller-69bbfbf88f-pnpzd" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.782055 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:36 crc kubenswrapper[4811]: I0216 21:09:36.858322 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-pnpzd" Feb 16 21:09:37 crc kubenswrapper[4811]: I0216 21:09:37.139504 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b0926b4d-4fed-4543-abd8-1e1cc65983f6-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-5ndl9\" (UID: \"b0926b4d-4fed-4543-abd8-1e1cc65983f6\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5ndl9" Feb 16 21:09:37 crc kubenswrapper[4811]: I0216 21:09:37.139947 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dfcb7d78-0504-47c4-a5bc-05f382feefaa-memberlist\") pod \"speaker-qvj4b\" (UID: \"dfcb7d78-0504-47c4-a5bc-05f382feefaa\") " pod="metallb-system/speaker-qvj4b" Feb 16 21:09:37 crc kubenswrapper[4811]: I0216 21:09:37.139990 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfcb7d78-0504-47c4-a5bc-05f382feefaa-metrics-certs\") pod \"speaker-qvj4b\" (UID: \"dfcb7d78-0504-47c4-a5bc-05f382feefaa\") " pod="metallb-system/speaker-qvj4b" Feb 16 21:09:37 crc kubenswrapper[4811]: E0216 21:09:37.140123 4811 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 16 21:09:37 crc kubenswrapper[4811]: E0216 21:09:37.140246 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfcb7d78-0504-47c4-a5bc-05f382feefaa-memberlist podName:dfcb7d78-0504-47c4-a5bc-05f382feefaa nodeName:}" failed. No retries permitted until 2026-02-16 21:09:38.140221025 +0000 UTC m=+796.069516963 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/dfcb7d78-0504-47c4-a5bc-05f382feefaa-memberlist") pod "speaker-qvj4b" (UID: "dfcb7d78-0504-47c4-a5bc-05f382feefaa") : secret "metallb-memberlist" not found Feb 16 21:09:37 crc kubenswrapper[4811]: I0216 21:09:37.145137 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfcb7d78-0504-47c4-a5bc-05f382feefaa-metrics-certs\") pod \"speaker-qvj4b\" (UID: \"dfcb7d78-0504-47c4-a5bc-05f382feefaa\") " pod="metallb-system/speaker-qvj4b" Feb 16 21:09:37 crc kubenswrapper[4811]: I0216 21:09:37.145441 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b0926b4d-4fed-4543-abd8-1e1cc65983f6-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-5ndl9\" (UID: \"b0926b4d-4fed-4543-abd8-1e1cc65983f6\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5ndl9" Feb 16 21:09:37 crc kubenswrapper[4811]: I0216 21:09:37.336784 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-pnpzd"] Feb 16 21:09:37 crc kubenswrapper[4811]: W0216 21:09:37.343109 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa405c88_9fb5_47e9_b4e6_70813ede9574.slice/crio-d5d90a9525696e06fe92742e781f6192237da48dae04ca7536eaf85ee8034954 WatchSource:0}: Error finding container d5d90a9525696e06fe92742e781f6192237da48dae04ca7536eaf85ee8034954: Status 404 returned error can't find the container with id d5d90a9525696e06fe92742e781f6192237da48dae04ca7536eaf85ee8034954 Feb 16 21:09:37 crc kubenswrapper[4811]: I0216 21:09:37.394392 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5ndl9" Feb 16 21:09:37 crc kubenswrapper[4811]: I0216 21:09:37.667709 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-pnpzd" event={"ID":"fa405c88-9fb5-47e9-b4e6-70813ede9574","Type":"ContainerStarted","Data":"26e43cebe169ff477049a20a2b8c4479a884d1cb6fbd5a7bb207e0dd681d32bc"} Feb 16 21:09:37 crc kubenswrapper[4811]: I0216 21:09:37.668113 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-pnpzd" event={"ID":"fa405c88-9fb5-47e9-b4e6-70813ede9574","Type":"ContainerStarted","Data":"d5d90a9525696e06fe92742e781f6192237da48dae04ca7536eaf85ee8034954"} Feb 16 21:09:37 crc kubenswrapper[4811]: I0216 21:09:37.668936 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rwqbc" event={"ID":"dc88841d-ab30-446e-a4f1-f7e37902c90d","Type":"ContainerStarted","Data":"9ad7aadbb58ac9a364107333dbbe1535dc915e914834b4023705736489308685"} Feb 16 21:09:37 crc kubenswrapper[4811]: I0216 21:09:37.686079 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-5ndl9"] Feb 16 21:09:38 crc kubenswrapper[4811]: I0216 21:09:38.154327 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dfcb7d78-0504-47c4-a5bc-05f382feefaa-memberlist\") pod \"speaker-qvj4b\" (UID: \"dfcb7d78-0504-47c4-a5bc-05f382feefaa\") " pod="metallb-system/speaker-qvj4b" Feb 16 21:09:38 crc kubenswrapper[4811]: I0216 21:09:38.180034 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dfcb7d78-0504-47c4-a5bc-05f382feefaa-memberlist\") pod \"speaker-qvj4b\" (UID: \"dfcb7d78-0504-47c4-a5bc-05f382feefaa\") " pod="metallb-system/speaker-qvj4b" Feb 16 21:09:38 crc kubenswrapper[4811]: I0216 21:09:38.337555 4811 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="metallb-system/speaker-qvj4b" Feb 16 21:09:38 crc kubenswrapper[4811]: I0216 21:09:38.678288 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-pnpzd" event={"ID":"fa405c88-9fb5-47e9-b4e6-70813ede9574","Type":"ContainerStarted","Data":"64dfa56f6e3f8420fb2b8b33b30e95d564f808911d4ffa3fb2390c79b4719e4d"} Feb 16 21:09:38 crc kubenswrapper[4811]: I0216 21:09:38.679240 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-pnpzd" Feb 16 21:09:38 crc kubenswrapper[4811]: I0216 21:09:38.681159 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5ndl9" event={"ID":"b0926b4d-4fed-4543-abd8-1e1cc65983f6","Type":"ContainerStarted","Data":"296639989ac3f9071b343fdc7cba9fd405ecb1654c2f3970c99ef86da88d20ae"} Feb 16 21:09:38 crc kubenswrapper[4811]: I0216 21:09:38.689745 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qvj4b" event={"ID":"dfcb7d78-0504-47c4-a5bc-05f382feefaa","Type":"ContainerStarted","Data":"33c65625820956e99739152991ca2e5496e70ec42645db7ff800949a609e7b60"} Feb 16 21:09:38 crc kubenswrapper[4811]: I0216 21:09:38.733878 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-pnpzd" podStartSLOduration=2.733852225 podStartE2EDuration="2.733852225s" podCreationTimestamp="2026-02-16 21:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:09:38.731908556 +0000 UTC m=+796.661204504" watchObservedRunningTime="2026-02-16 21:09:38.733852225 +0000 UTC m=+796.663148163" Feb 16 21:09:39 crc kubenswrapper[4811]: I0216 21:09:39.705131 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qvj4b" 
event={"ID":"dfcb7d78-0504-47c4-a5bc-05f382feefaa","Type":"ContainerStarted","Data":"5b87e7800a3b799a4acaf5826e10854116ce980091435fe83f5b8ba02ef21b8f"} Feb 16 21:09:39 crc kubenswrapper[4811]: I0216 21:09:39.705408 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qvj4b" event={"ID":"dfcb7d78-0504-47c4-a5bc-05f382feefaa","Type":"ContainerStarted","Data":"f8c7e2a3eee06faf1a93765a706810a27fc9ca833611c7ad3b1d43bf3004bfc4"} Feb 16 21:09:39 crc kubenswrapper[4811]: I0216 21:09:39.705423 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-qvj4b" Feb 16 21:09:39 crc kubenswrapper[4811]: I0216 21:09:39.723832 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-qvj4b" podStartSLOduration=3.72380332 podStartE2EDuration="3.72380332s" podCreationTimestamp="2026-02-16 21:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:09:39.722411375 +0000 UTC m=+797.651707313" watchObservedRunningTime="2026-02-16 21:09:39.72380332 +0000 UTC m=+797.653099258" Feb 16 21:09:44 crc kubenswrapper[4811]: I0216 21:09:44.738308 4811 generic.go:334] "Generic (PLEG): container finished" podID="dc88841d-ab30-446e-a4f1-f7e37902c90d" containerID="54a2da846ccbbaa9ec3185037778705c289f195ac292d05aa5166d6fb7d4409b" exitCode=0 Feb 16 21:09:44 crc kubenswrapper[4811]: I0216 21:09:44.738405 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rwqbc" event={"ID":"dc88841d-ab30-446e-a4f1-f7e37902c90d","Type":"ContainerDied","Data":"54a2da846ccbbaa9ec3185037778705c289f195ac292d05aa5166d6fb7d4409b"} Feb 16 21:09:44 crc kubenswrapper[4811]: I0216 21:09:44.741230 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5ndl9" 
event={"ID":"b0926b4d-4fed-4543-abd8-1e1cc65983f6","Type":"ContainerStarted","Data":"92f4c0ea950a80b0453fad023ad0a3e8117d407b1beb016989bf0da431e67f24"} Feb 16 21:09:44 crc kubenswrapper[4811]: I0216 21:09:44.741967 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5ndl9" Feb 16 21:09:44 crc kubenswrapper[4811]: I0216 21:09:44.788781 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5ndl9" podStartSLOduration=2.322062884 podStartE2EDuration="8.788753636s" podCreationTimestamp="2026-02-16 21:09:36 +0000 UTC" firstStartedPulling="2026-02-16 21:09:37.69749547 +0000 UTC m=+795.626791418" lastFinishedPulling="2026-02-16 21:09:44.164186222 +0000 UTC m=+802.093482170" observedRunningTime="2026-02-16 21:09:44.782324563 +0000 UTC m=+802.711620551" watchObservedRunningTime="2026-02-16 21:09:44.788753636 +0000 UTC m=+802.718049584" Feb 16 21:09:45 crc kubenswrapper[4811]: I0216 21:09:45.751554 4811 generic.go:334] "Generic (PLEG): container finished" podID="dc88841d-ab30-446e-a4f1-f7e37902c90d" containerID="7d342b9730ffe0e4b05d91d5e0675d537dbcab4b27954bb11ac9a5cebb10a557" exitCode=0 Feb 16 21:09:45 crc kubenswrapper[4811]: I0216 21:09:45.751618 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rwqbc" event={"ID":"dc88841d-ab30-446e-a4f1-f7e37902c90d","Type":"ContainerDied","Data":"7d342b9730ffe0e4b05d91d5e0675d537dbcab4b27954bb11ac9a5cebb10a557"} Feb 16 21:09:46 crc kubenswrapper[4811]: I0216 21:09:46.765642 4811 generic.go:334] "Generic (PLEG): container finished" podID="dc88841d-ab30-446e-a4f1-f7e37902c90d" containerID="c713364775e7c6ae745e1c515d038b0374a148cd8415048acf3d325f419416d6" exitCode=0 Feb 16 21:09:46 crc kubenswrapper[4811]: I0216 21:09:46.765869 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rwqbc" 
event={"ID":"dc88841d-ab30-446e-a4f1-f7e37902c90d","Type":"ContainerDied","Data":"c713364775e7c6ae745e1c515d038b0374a148cd8415048acf3d325f419416d6"} Feb 16 21:09:47 crc kubenswrapper[4811]: I0216 21:09:47.778407 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rwqbc" event={"ID":"dc88841d-ab30-446e-a4f1-f7e37902c90d","Type":"ContainerStarted","Data":"ae0deef864600bdaae6712092e3e87f8adeff96a0a4da41aa4cad28a2ef46ad8"} Feb 16 21:09:47 crc kubenswrapper[4811]: I0216 21:09:47.778750 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rwqbc" event={"ID":"dc88841d-ab30-446e-a4f1-f7e37902c90d","Type":"ContainerStarted","Data":"b58182777bb70c7620f0368efe572276ccd25784301558fde916a8fbd63b308c"} Feb 16 21:09:47 crc kubenswrapper[4811]: I0216 21:09:47.778765 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rwqbc" event={"ID":"dc88841d-ab30-446e-a4f1-f7e37902c90d","Type":"ContainerStarted","Data":"cfc89314c1a4f37148a960b1988346bcc13902d90d6241630f6648ebac3d4560"} Feb 16 21:09:47 crc kubenswrapper[4811]: I0216 21:09:47.778778 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rwqbc" event={"ID":"dc88841d-ab30-446e-a4f1-f7e37902c90d","Type":"ContainerStarted","Data":"9e786a1e1cf8f2d483929f0cbd5293f572b7c86b37b93b41676f4bb6bcb14eaa"} Feb 16 21:09:47 crc kubenswrapper[4811]: I0216 21:09:47.778787 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rwqbc" event={"ID":"dc88841d-ab30-446e-a4f1-f7e37902c90d","Type":"ContainerStarted","Data":"a5f4647b8106e7e2d42d5312853d41c749c7935175f2c713512413939f925a16"} Feb 16 21:09:48 crc kubenswrapper[4811]: I0216 21:09:48.342689 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-qvj4b" Feb 16 21:09:48 crc kubenswrapper[4811]: I0216 21:09:48.789935 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rwqbc" 
event={"ID":"dc88841d-ab30-446e-a4f1-f7e37902c90d","Type":"ContainerStarted","Data":"cd192f7b02ced2c04eeb38a0c52cf0ed8fbf5e0ba33bf71c7d1c5a6c27b32220"} Feb 16 21:09:48 crc kubenswrapper[4811]: I0216 21:09:48.790216 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:48 crc kubenswrapper[4811]: I0216 21:09:48.823731 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-rwqbc" podStartSLOduration=5.566530238 podStartE2EDuration="12.823704439s" podCreationTimestamp="2026-02-16 21:09:36 +0000 UTC" firstStartedPulling="2026-02-16 21:09:36.879317747 +0000 UTC m=+794.808613685" lastFinishedPulling="2026-02-16 21:09:44.136491918 +0000 UTC m=+802.065787886" observedRunningTime="2026-02-16 21:09:48.81664348 +0000 UTC m=+806.745939458" watchObservedRunningTime="2026-02-16 21:09:48.823704439 +0000 UTC m=+806.753000417" Feb 16 21:09:51 crc kubenswrapper[4811]: I0216 21:09:51.783408 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:51 crc kubenswrapper[4811]: I0216 21:09:51.819412 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:54 crc kubenswrapper[4811]: I0216 21:09:54.724098 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-4g8pq"] Feb 16 21:09:54 crc kubenswrapper[4811]: I0216 21:09:54.726183 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-4g8pq" Feb 16 21:09:54 crc kubenswrapper[4811]: I0216 21:09:54.730159 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 16 21:09:54 crc kubenswrapper[4811]: I0216 21:09:54.730180 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-4dn5s" Feb 16 21:09:54 crc kubenswrapper[4811]: I0216 21:09:54.731171 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 16 21:09:54 crc kubenswrapper[4811]: I0216 21:09:54.737513 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4g8pq"] Feb 16 21:09:54 crc kubenswrapper[4811]: I0216 21:09:54.782285 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgwrm\" (UniqueName: \"kubernetes.io/projected/727ccbcc-fd6b-4e49-9905-1e158605c309-kube-api-access-fgwrm\") pod \"openstack-operator-index-4g8pq\" (UID: \"727ccbcc-fd6b-4e49-9905-1e158605c309\") " pod="openstack-operators/openstack-operator-index-4g8pq" Feb 16 21:09:54 crc kubenswrapper[4811]: I0216 21:09:54.883922 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgwrm\" (UniqueName: \"kubernetes.io/projected/727ccbcc-fd6b-4e49-9905-1e158605c309-kube-api-access-fgwrm\") pod \"openstack-operator-index-4g8pq\" (UID: \"727ccbcc-fd6b-4e49-9905-1e158605c309\") " pod="openstack-operators/openstack-operator-index-4g8pq" Feb 16 21:09:54 crc kubenswrapper[4811]: I0216 21:09:54.908123 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgwrm\" (UniqueName: \"kubernetes.io/projected/727ccbcc-fd6b-4e49-9905-1e158605c309-kube-api-access-fgwrm\") pod \"openstack-operator-index-4g8pq\" (UID: 
\"727ccbcc-fd6b-4e49-9905-1e158605c309\") " pod="openstack-operators/openstack-operator-index-4g8pq" Feb 16 21:09:55 crc kubenswrapper[4811]: I0216 21:09:55.052718 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-4g8pq" Feb 16 21:09:55 crc kubenswrapper[4811]: I0216 21:09:55.482869 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4g8pq"] Feb 16 21:09:55 crc kubenswrapper[4811]: W0216 21:09:55.490112 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod727ccbcc_fd6b_4e49_9905_1e158605c309.slice/crio-451edd4692095c0d522c918e8dfa74ecc0bdcaade6bdb23727d5f5551dd21188 WatchSource:0}: Error finding container 451edd4692095c0d522c918e8dfa74ecc0bdcaade6bdb23727d5f5551dd21188: Status 404 returned error can't find the container with id 451edd4692095c0d522c918e8dfa74ecc0bdcaade6bdb23727d5f5551dd21188 Feb 16 21:09:55 crc kubenswrapper[4811]: I0216 21:09:55.876607 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4g8pq" event={"ID":"727ccbcc-fd6b-4e49-9905-1e158605c309","Type":"ContainerStarted","Data":"451edd4692095c0d522c918e8dfa74ecc0bdcaade6bdb23727d5f5551dd21188"} Feb 16 21:09:56 crc kubenswrapper[4811]: I0216 21:09:56.785573 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-rwqbc" Feb 16 21:09:56 crc kubenswrapper[4811]: I0216 21:09:56.862352 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-pnpzd" Feb 16 21:09:57 crc kubenswrapper[4811]: I0216 21:09:57.403878 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-5ndl9" Feb 16 21:09:58 crc kubenswrapper[4811]: I0216 21:09:58.905811 4811 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/openstack-operator-index-4g8pq" event={"ID":"727ccbcc-fd6b-4e49-9905-1e158605c309","Type":"ContainerStarted","Data":"72f7172da8b0a8b0b48e1afdae9f2992bbe5a973e1347863a5a3cb6ad65a2c3b"} Feb 16 21:09:58 crc kubenswrapper[4811]: I0216 21:09:58.931848 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-4g8pq" podStartSLOduration=2.623285287 podStartE2EDuration="4.931820367s" podCreationTimestamp="2026-02-16 21:09:54 +0000 UTC" firstStartedPulling="2026-02-16 21:09:55.495838999 +0000 UTC m=+813.425134937" lastFinishedPulling="2026-02-16 21:09:57.804374069 +0000 UTC m=+815.733670017" observedRunningTime="2026-02-16 21:09:58.925665301 +0000 UTC m=+816.854961269" watchObservedRunningTime="2026-02-16 21:09:58.931820367 +0000 UTC m=+816.861116345" Feb 16 21:10:05 crc kubenswrapper[4811]: I0216 21:10:05.053780 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-4g8pq" Feb 16 21:10:05 crc kubenswrapper[4811]: I0216 21:10:05.055418 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-4g8pq" Feb 16 21:10:05 crc kubenswrapper[4811]: I0216 21:10:05.091287 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-4g8pq" Feb 16 21:10:05 crc kubenswrapper[4811]: I0216 21:10:05.990791 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-4g8pq" Feb 16 21:10:06 crc kubenswrapper[4811]: I0216 21:10:06.761715 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l"] Feb 16 21:10:06 crc kubenswrapper[4811]: I0216 21:10:06.763739 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l" Feb 16 21:10:06 crc kubenswrapper[4811]: I0216 21:10:06.766334 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-mg4rc" Feb 16 21:10:06 crc kubenswrapper[4811]: I0216 21:10:06.776959 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l"] Feb 16 21:10:06 crc kubenswrapper[4811]: I0216 21:10:06.881041 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aee284bd-e428-4002-aace-b760dbe7acf3-bundle\") pod \"d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l\" (UID: \"aee284bd-e428-4002-aace-b760dbe7acf3\") " pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l" Feb 16 21:10:06 crc kubenswrapper[4811]: I0216 21:10:06.881160 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfcv9\" (UniqueName: \"kubernetes.io/projected/aee284bd-e428-4002-aace-b760dbe7acf3-kube-api-access-sfcv9\") pod \"d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l\" (UID: \"aee284bd-e428-4002-aace-b760dbe7acf3\") " pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l" Feb 16 21:10:06 crc kubenswrapper[4811]: I0216 21:10:06.881259 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aee284bd-e428-4002-aace-b760dbe7acf3-util\") pod \"d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l\" (UID: \"aee284bd-e428-4002-aace-b760dbe7acf3\") " pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l" Feb 16 21:10:06 crc kubenswrapper[4811]: I0216 
21:10:06.982989 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfcv9\" (UniqueName: \"kubernetes.io/projected/aee284bd-e428-4002-aace-b760dbe7acf3-kube-api-access-sfcv9\") pod \"d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l\" (UID: \"aee284bd-e428-4002-aace-b760dbe7acf3\") " pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l" Feb 16 21:10:06 crc kubenswrapper[4811]: I0216 21:10:06.983148 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aee284bd-e428-4002-aace-b760dbe7acf3-util\") pod \"d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l\" (UID: \"aee284bd-e428-4002-aace-b760dbe7acf3\") " pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l" Feb 16 21:10:06 crc kubenswrapper[4811]: I0216 21:10:06.984036 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aee284bd-e428-4002-aace-b760dbe7acf3-util\") pod \"d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l\" (UID: \"aee284bd-e428-4002-aace-b760dbe7acf3\") " pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l" Feb 16 21:10:06 crc kubenswrapper[4811]: I0216 21:10:06.984393 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aee284bd-e428-4002-aace-b760dbe7acf3-bundle\") pod \"d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l\" (UID: \"aee284bd-e428-4002-aace-b760dbe7acf3\") " pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l" Feb 16 21:10:06 crc kubenswrapper[4811]: I0216 21:10:06.984998 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/aee284bd-e428-4002-aace-b760dbe7acf3-bundle\") pod \"d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l\" (UID: \"aee284bd-e428-4002-aace-b760dbe7acf3\") " pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l" Feb 16 21:10:07 crc kubenswrapper[4811]: I0216 21:10:07.004834 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfcv9\" (UniqueName: \"kubernetes.io/projected/aee284bd-e428-4002-aace-b760dbe7acf3-kube-api-access-sfcv9\") pod \"d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l\" (UID: \"aee284bd-e428-4002-aace-b760dbe7acf3\") " pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l" Feb 16 21:10:07 crc kubenswrapper[4811]: I0216 21:10:07.097719 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l" Feb 16 21:10:07 crc kubenswrapper[4811]: I0216 21:10:07.373123 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l"] Feb 16 21:10:07 crc kubenswrapper[4811]: I0216 21:10:07.974527 4811 generic.go:334] "Generic (PLEG): container finished" podID="aee284bd-e428-4002-aace-b760dbe7acf3" containerID="7b92ff71c0078649eb752915ca53bdda2c4bcc58c50cb01512ba5ae85507fbc1" exitCode=0 Feb 16 21:10:07 crc kubenswrapper[4811]: I0216 21:10:07.974610 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l" event={"ID":"aee284bd-e428-4002-aace-b760dbe7acf3","Type":"ContainerDied","Data":"7b92ff71c0078649eb752915ca53bdda2c4bcc58c50cb01512ba5ae85507fbc1"} Feb 16 21:10:07 crc kubenswrapper[4811]: I0216 21:10:07.974934 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l" event={"ID":"aee284bd-e428-4002-aace-b760dbe7acf3","Type":"ContainerStarted","Data":"4df968e76b0ca134eb8da6c5b1a0c529c090f59b5e425b3dc34580549ac7775b"} Feb 16 21:10:07 crc kubenswrapper[4811]: I0216 21:10:07.977509 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:10:08 crc kubenswrapper[4811]: I0216 21:10:08.988111 4811 generic.go:334] "Generic (PLEG): container finished" podID="aee284bd-e428-4002-aace-b760dbe7acf3" containerID="772a13b4b4affdba50e4b106953444b1f806be4830f5ceaceb762b77b1483e84" exitCode=0 Feb 16 21:10:08 crc kubenswrapper[4811]: I0216 21:10:08.988183 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l" event={"ID":"aee284bd-e428-4002-aace-b760dbe7acf3","Type":"ContainerDied","Data":"772a13b4b4affdba50e4b106953444b1f806be4830f5ceaceb762b77b1483e84"} Feb 16 21:10:10 crc kubenswrapper[4811]: I0216 21:10:10.003564 4811 generic.go:334] "Generic (PLEG): container finished" podID="aee284bd-e428-4002-aace-b760dbe7acf3" containerID="06af74057e666050fe8ce7f9e3e9ae9becfb9231d32062c5879a4718d9a773aa" exitCode=0 Feb 16 21:10:10 crc kubenswrapper[4811]: I0216 21:10:10.003642 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l" event={"ID":"aee284bd-e428-4002-aace-b760dbe7acf3","Type":"ContainerDied","Data":"06af74057e666050fe8ce7f9e3e9ae9becfb9231d32062c5879a4718d9a773aa"} Feb 16 21:10:11 crc kubenswrapper[4811]: I0216 21:10:11.401428 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l" Feb 16 21:10:11 crc kubenswrapper[4811]: I0216 21:10:11.455795 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfcv9\" (UniqueName: \"kubernetes.io/projected/aee284bd-e428-4002-aace-b760dbe7acf3-kube-api-access-sfcv9\") pod \"aee284bd-e428-4002-aace-b760dbe7acf3\" (UID: \"aee284bd-e428-4002-aace-b760dbe7acf3\") " Feb 16 21:10:11 crc kubenswrapper[4811]: I0216 21:10:11.456044 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aee284bd-e428-4002-aace-b760dbe7acf3-util\") pod \"aee284bd-e428-4002-aace-b760dbe7acf3\" (UID: \"aee284bd-e428-4002-aace-b760dbe7acf3\") " Feb 16 21:10:11 crc kubenswrapper[4811]: I0216 21:10:11.456179 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aee284bd-e428-4002-aace-b760dbe7acf3-bundle\") pod \"aee284bd-e428-4002-aace-b760dbe7acf3\" (UID: \"aee284bd-e428-4002-aace-b760dbe7acf3\") " Feb 16 21:10:11 crc kubenswrapper[4811]: I0216 21:10:11.457459 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aee284bd-e428-4002-aace-b760dbe7acf3-bundle" (OuterVolumeSpecName: "bundle") pod "aee284bd-e428-4002-aace-b760dbe7acf3" (UID: "aee284bd-e428-4002-aace-b760dbe7acf3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:10:11 crc kubenswrapper[4811]: I0216 21:10:11.466264 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aee284bd-e428-4002-aace-b760dbe7acf3-kube-api-access-sfcv9" (OuterVolumeSpecName: "kube-api-access-sfcv9") pod "aee284bd-e428-4002-aace-b760dbe7acf3" (UID: "aee284bd-e428-4002-aace-b760dbe7acf3"). InnerVolumeSpecName "kube-api-access-sfcv9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:10:11 crc kubenswrapper[4811]: I0216 21:10:11.490627 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aee284bd-e428-4002-aace-b760dbe7acf3-util" (OuterVolumeSpecName: "util") pod "aee284bd-e428-4002-aace-b760dbe7acf3" (UID: "aee284bd-e428-4002-aace-b760dbe7acf3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:10:11 crc kubenswrapper[4811]: I0216 21:10:11.558104 4811 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aee284bd-e428-4002-aace-b760dbe7acf3-util\") on node \"crc\" DevicePath \"\"" Feb 16 21:10:11 crc kubenswrapper[4811]: I0216 21:10:11.558155 4811 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aee284bd-e428-4002-aace-b760dbe7acf3-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:10:11 crc kubenswrapper[4811]: I0216 21:10:11.558175 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfcv9\" (UniqueName: \"kubernetes.io/projected/aee284bd-e428-4002-aace-b760dbe7acf3-kube-api-access-sfcv9\") on node \"crc\" DevicePath \"\"" Feb 16 21:10:12 crc kubenswrapper[4811]: I0216 21:10:12.022764 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l" event={"ID":"aee284bd-e428-4002-aace-b760dbe7acf3","Type":"ContainerDied","Data":"4df968e76b0ca134eb8da6c5b1a0c529c090f59b5e425b3dc34580549ac7775b"} Feb 16 21:10:12 crc kubenswrapper[4811]: I0216 21:10:12.022810 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4df968e76b0ca134eb8da6c5b1a0c529c090f59b5e425b3dc34580549ac7775b" Feb 16 21:10:12 crc kubenswrapper[4811]: I0216 21:10:12.022856 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l" Feb 16 21:10:14 crc kubenswrapper[4811]: I0216 21:10:14.890007 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7dd97cff99-2dj7p"] Feb 16 21:10:14 crc kubenswrapper[4811]: E0216 21:10:14.890602 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aee284bd-e428-4002-aace-b760dbe7acf3" containerName="pull" Feb 16 21:10:14 crc kubenswrapper[4811]: I0216 21:10:14.890615 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="aee284bd-e428-4002-aace-b760dbe7acf3" containerName="pull" Feb 16 21:10:14 crc kubenswrapper[4811]: E0216 21:10:14.890627 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aee284bd-e428-4002-aace-b760dbe7acf3" containerName="extract" Feb 16 21:10:14 crc kubenswrapper[4811]: I0216 21:10:14.890633 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="aee284bd-e428-4002-aace-b760dbe7acf3" containerName="extract" Feb 16 21:10:14 crc kubenswrapper[4811]: E0216 21:10:14.890650 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aee284bd-e428-4002-aace-b760dbe7acf3" containerName="util" Feb 16 21:10:14 crc kubenswrapper[4811]: I0216 21:10:14.890655 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="aee284bd-e428-4002-aace-b760dbe7acf3" containerName="util" Feb 16 21:10:14 crc kubenswrapper[4811]: I0216 21:10:14.890781 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="aee284bd-e428-4002-aace-b760dbe7acf3" containerName="extract" Feb 16 21:10:14 crc kubenswrapper[4811]: I0216 21:10:14.891223 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7dd97cff99-2dj7p" Feb 16 21:10:14 crc kubenswrapper[4811]: I0216 21:10:14.895304 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-kn5ls" Feb 16 21:10:14 crc kubenswrapper[4811]: I0216 21:10:14.912141 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7dd97cff99-2dj7p"] Feb 16 21:10:15 crc kubenswrapper[4811]: I0216 21:10:15.024633 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkjmb\" (UniqueName: \"kubernetes.io/projected/83aacf19-18bb-47e5-a94f-2949859ac9a3-kube-api-access-jkjmb\") pod \"openstack-operator-controller-init-7dd97cff99-2dj7p\" (UID: \"83aacf19-18bb-47e5-a94f-2949859ac9a3\") " pod="openstack-operators/openstack-operator-controller-init-7dd97cff99-2dj7p" Feb 16 21:10:15 crc kubenswrapper[4811]: I0216 21:10:15.125450 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkjmb\" (UniqueName: \"kubernetes.io/projected/83aacf19-18bb-47e5-a94f-2949859ac9a3-kube-api-access-jkjmb\") pod \"openstack-operator-controller-init-7dd97cff99-2dj7p\" (UID: \"83aacf19-18bb-47e5-a94f-2949859ac9a3\") " pod="openstack-operators/openstack-operator-controller-init-7dd97cff99-2dj7p" Feb 16 21:10:15 crc kubenswrapper[4811]: I0216 21:10:15.154412 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkjmb\" (UniqueName: \"kubernetes.io/projected/83aacf19-18bb-47e5-a94f-2949859ac9a3-kube-api-access-jkjmb\") pod \"openstack-operator-controller-init-7dd97cff99-2dj7p\" (UID: \"83aacf19-18bb-47e5-a94f-2949859ac9a3\") " pod="openstack-operators/openstack-operator-controller-init-7dd97cff99-2dj7p" Feb 16 21:10:15 crc kubenswrapper[4811]: I0216 21:10:15.209704 4811 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7dd97cff99-2dj7p" Feb 16 21:10:15 crc kubenswrapper[4811]: I0216 21:10:15.695700 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7dd97cff99-2dj7p"] Feb 16 21:10:16 crc kubenswrapper[4811]: I0216 21:10:16.062642 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7dd97cff99-2dj7p" event={"ID":"83aacf19-18bb-47e5-a94f-2949859ac9a3","Type":"ContainerStarted","Data":"0ecb327df3ac458976bea8b7df0680cc590959958b0a137b6e76c4af4afe2a9c"} Feb 16 21:10:20 crc kubenswrapper[4811]: I0216 21:10:20.096536 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7dd97cff99-2dj7p" event={"ID":"83aacf19-18bb-47e5-a94f-2949859ac9a3","Type":"ContainerStarted","Data":"3e66e34ef72ff33ded80bf75a032d6b120b5ac7d3b83a6fb75f3e4c548d15063"} Feb 16 21:10:20 crc kubenswrapper[4811]: I0216 21:10:20.097499 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7dd97cff99-2dj7p" Feb 16 21:10:20 crc kubenswrapper[4811]: I0216 21:10:20.153281 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7dd97cff99-2dj7p" podStartSLOduration=2.31732535 podStartE2EDuration="6.153175385s" podCreationTimestamp="2026-02-16 21:10:14 +0000 UTC" firstStartedPulling="2026-02-16 21:10:15.704334179 +0000 UTC m=+833.633630127" lastFinishedPulling="2026-02-16 21:10:19.540184224 +0000 UTC m=+837.469480162" observedRunningTime="2026-02-16 21:10:20.145960502 +0000 UTC m=+838.075256480" watchObservedRunningTime="2026-02-16 21:10:20.153175385 +0000 UTC m=+838.082471353" Feb 16 21:10:25 crc kubenswrapper[4811]: I0216 21:10:25.214052 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/openstack-operator-controller-init-7dd97cff99-2dj7p" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.213439 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-4m2sl"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.214785 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-4m2sl" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.217020 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-mjd98" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.227618 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-jckks"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.229032 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jckks" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.232785 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-5ddf5" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.233906 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-4m2sl"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.240487 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-q89bq"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.241846 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-q89bq" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.247655 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-tfjpt" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.258277 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-jckks"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.261452 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-q89bq"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.305186 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b567s\" (UniqueName: \"kubernetes.io/projected/a930b399-b523-4186-8bf8-c9f071a52b0d-kube-api-access-b567s\") pod \"barbican-operator-controller-manager-868647ff47-4m2sl\" (UID: \"a930b399-b523-4186-8bf8-c9f071a52b0d\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-4m2sl" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.316256 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-xsbk9"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.317084 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xsbk9" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.319526 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-m94nh" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.339875 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-lh792"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.341109 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-lh792" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.346428 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-tpckg" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.361001 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cdqwz"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.361760 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cdqwz" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.363069 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-phpp9" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.368645 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-qpcgx"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.396919 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qpcgx" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.436611 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97d65\" (UniqueName: \"kubernetes.io/projected/6abf6059-c304-4c75-b9df-89c83549963c-kube-api-access-97d65\") pod \"cinder-operator-controller-manager-5d946d989d-jckks\" (UID: \"6abf6059-c304-4c75-b9df-89c83549963c\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jckks" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.436691 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2b7d\" (UniqueName: \"kubernetes.io/projected/fa86f7ef-e087-4967-acb0-3d5e36d5629e-kube-api-access-c2b7d\") pod \"designate-operator-controller-manager-6d8bf5c495-q89bq\" (UID: \"fa86f7ef-e087-4967-acb0-3d5e36d5629e\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-q89bq" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.436756 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b567s\" (UniqueName: \"kubernetes.io/projected/a930b399-b523-4186-8bf8-c9f071a52b0d-kube-api-access-b567s\") pod \"barbican-operator-controller-manager-868647ff47-4m2sl\" (UID: \"a930b399-b523-4186-8bf8-c9f071a52b0d\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-4m2sl" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.436811 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kd9b\" (UniqueName: \"kubernetes.io/projected/7d5cf64e-0afc-4017-94b9-8fdf40a7cf89-kube-api-access-2kd9b\") pod \"heat-operator-controller-manager-69f49c598c-xsbk9\" (UID: \"7d5cf64e-0afc-4017-94b9-8fdf40a7cf89\") " 
pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xsbk9" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.437759 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.454583 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-xsbk9"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.471077 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-dhvs8" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.483914 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b567s\" (UniqueName: \"kubernetes.io/projected/a930b399-b523-4186-8bf8-c9f071a52b0d-kube-api-access-b567s\") pod \"barbican-operator-controller-manager-868647ff47-4m2sl\" (UID: \"a930b399-b523-4186-8bf8-c9f071a52b0d\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-4m2sl" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.528169 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cdqwz"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.538386 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-lh792"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.548949 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kd9b\" (UniqueName: \"kubernetes.io/projected/7d5cf64e-0afc-4017-94b9-8fdf40a7cf89-kube-api-access-2kd9b\") pod \"heat-operator-controller-manager-69f49c598c-xsbk9\" (UID: \"7d5cf64e-0afc-4017-94b9-8fdf40a7cf89\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xsbk9" Feb 16 21:10:45 crc 
kubenswrapper[4811]: I0216 21:10:45.548961 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-4m2sl" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.549020 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df9a27af-f077-408b-8559-29f9c41b7d78-cert\") pod \"infra-operator-controller-manager-79d975b745-qpcgx\" (UID: \"df9a27af-f077-408b-8559-29f9c41b7d78\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qpcgx" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.549068 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97d65\" (UniqueName: \"kubernetes.io/projected/6abf6059-c304-4c75-b9df-89c83549963c-kube-api-access-97d65\") pod \"cinder-operator-controller-manager-5d946d989d-jckks\" (UID: \"6abf6059-c304-4c75-b9df-89c83549963c\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jckks" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.549100 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss8lt\" (UniqueName: \"kubernetes.io/projected/6990871b-47ed-4368-a1f2-f582e0c01e81-kube-api-access-ss8lt\") pod \"glance-operator-controller-manager-77987464f4-lh792\" (UID: \"6990871b-47ed-4368-a1f2-f582e0c01e81\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-lh792" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.549120 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2b7d\" (UniqueName: \"kubernetes.io/projected/fa86f7ef-e087-4967-acb0-3d5e36d5629e-kube-api-access-c2b7d\") pod \"designate-operator-controller-manager-6d8bf5c495-q89bq\" (UID: \"fa86f7ef-e087-4967-acb0-3d5e36d5629e\") " 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-q89bq" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.549142 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pf52\" (UniqueName: \"kubernetes.io/projected/df9a27af-f077-408b-8559-29f9c41b7d78-kube-api-access-8pf52\") pod \"infra-operator-controller-manager-79d975b745-qpcgx\" (UID: \"df9a27af-f077-408b-8559-29f9c41b7d78\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qpcgx" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.549160 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn74k\" (UniqueName: \"kubernetes.io/projected/9e676231-474a-4831-a71f-7788b6d15f03-kube-api-access-dn74k\") pod \"horizon-operator-controller-manager-5b9b8895d5-cdqwz\" (UID: \"9e676231-474a-4831-a71f-7788b6d15f03\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cdqwz" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.555260 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-qpcgx"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.560330 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-rwl5h"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.561421 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-rwl5h" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.564335 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-n9qtc" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.569186 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-ql7fv"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.569447 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97d65\" (UniqueName: \"kubernetes.io/projected/6abf6059-c304-4c75-b9df-89c83549963c-kube-api-access-97d65\") pod \"cinder-operator-controller-manager-5d946d989d-jckks\" (UID: \"6abf6059-c304-4c75-b9df-89c83549963c\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jckks" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.571580 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2b7d\" (UniqueName: \"kubernetes.io/projected/fa86f7ef-e087-4967-acb0-3d5e36d5629e-kube-api-access-c2b7d\") pod \"designate-operator-controller-manager-6d8bf5c495-q89bq\" (UID: \"fa86f7ef-e087-4967-acb0-3d5e36d5629e\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-q89bq" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.571799 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ql7fv" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.573336 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-xhwk8" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.573843 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kd9b\" (UniqueName: \"kubernetes.io/projected/7d5cf64e-0afc-4017-94b9-8fdf40a7cf89-kube-api-access-2kd9b\") pod \"heat-operator-controller-manager-69f49c598c-xsbk9\" (UID: \"7d5cf64e-0afc-4017-94b9-8fdf40a7cf89\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xsbk9" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.578028 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-8s6fh"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.579410 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-ql7fv"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.581514 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8s6fh" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.581824 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jckks" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.583804 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-jdgr8" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.592694 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-8s6fh"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.594462 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-q89bq" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.601562 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-rwl5h"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.619539 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-hnr8w"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.620888 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-hnr8w" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.622435 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-z7pkj" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.627295 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-hnr8w"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.635315 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-dr4l8"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.636252 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-dr4l8" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.638973 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-mtpls" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.641598 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-dr4l8"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.650246 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-hkcjr"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.651436 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-hkcjr" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.652432 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-5rm5q"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.652894 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfb7d\" (UniqueName: \"kubernetes.io/projected/4959d7c9-42b3-479d-a5d9-f2d2a941b57f-kube-api-access-nfb7d\") pod \"ironic-operator-controller-manager-554564d7fc-rwl5h\" (UID: \"4959d7c9-42b3-479d-a5d9-f2d2a941b57f\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-rwl5h" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.667391 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss8lt\" (UniqueName: \"kubernetes.io/projected/6990871b-47ed-4368-a1f2-f582e0c01e81-kube-api-access-ss8lt\") pod \"glance-operator-controller-manager-77987464f4-lh792\" (UID: \"6990871b-47ed-4368-a1f2-f582e0c01e81\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-lh792" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.667445 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pf52\" (UniqueName: \"kubernetes.io/projected/df9a27af-f077-408b-8559-29f9c41b7d78-kube-api-access-8pf52\") pod \"infra-operator-controller-manager-79d975b745-qpcgx\" (UID: \"df9a27af-f077-408b-8559-29f9c41b7d78\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qpcgx" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.667478 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn74k\" (UniqueName: \"kubernetes.io/projected/9e676231-474a-4831-a71f-7788b6d15f03-kube-api-access-dn74k\") pod 
\"horizon-operator-controller-manager-5b9b8895d5-cdqwz\" (UID: \"9e676231-474a-4831-a71f-7788b6d15f03\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cdqwz" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.667520 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4vjr\" (UniqueName: \"kubernetes.io/projected/8724640a-57a7-402e-9bf8-a40105f068a0-kube-api-access-k4vjr\") pod \"keystone-operator-controller-manager-b4d948c87-ql7fv\" (UID: \"8724640a-57a7-402e-9bf8-a40105f068a0\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ql7fv" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.667659 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df9a27af-f077-408b-8559-29f9c41b7d78-cert\") pod \"infra-operator-controller-manager-79d975b745-qpcgx\" (UID: \"df9a27af-f077-408b-8559-29f9c41b7d78\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qpcgx" Feb 16 21:10:45 crc kubenswrapper[4811]: E0216 21:10:45.667871 4811 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 21:10:45 crc kubenswrapper[4811]: E0216 21:10:45.667930 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df9a27af-f077-408b-8559-29f9c41b7d78-cert podName:df9a27af-f077-408b-8559-29f9c41b7d78 nodeName:}" failed. No retries permitted until 2026-02-16 21:10:46.16790977 +0000 UTC m=+864.097205708 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/df9a27af-f077-408b-8559-29f9c41b7d78-cert") pod "infra-operator-controller-manager-79d975b745-qpcgx" (UID: "df9a27af-f077-408b-8559-29f9c41b7d78") : secret "infra-operator-webhook-server-cert" not found Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.678257 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-z546h" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.682605 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-hkcjr"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.682640 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-5rm5q"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.682652 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-q2fkl"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.683290 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-q2fkl" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.683399 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-5rm5q" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.687369 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-nkmbn" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.687687 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-9njkf" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.687964 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pf52\" (UniqueName: \"kubernetes.io/projected/df9a27af-f077-408b-8559-29f9c41b7d78-kube-api-access-8pf52\") pod \"infra-operator-controller-manager-79d975b745-qpcgx\" (UID: \"df9a27af-f077-408b-8559-29f9c41b7d78\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qpcgx" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.689372 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss8lt\" (UniqueName: \"kubernetes.io/projected/6990871b-47ed-4368-a1f2-f582e0c01e81-kube-api-access-ss8lt\") pod \"glance-operator-controller-manager-77987464f4-lh792\" (UID: \"6990871b-47ed-4368-a1f2-f582e0c01e81\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-lh792" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.691622 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn74k\" (UniqueName: \"kubernetes.io/projected/9e676231-474a-4831-a71f-7788b6d15f03-kube-api-access-dn74k\") pod \"horizon-operator-controller-manager-5b9b8895d5-cdqwz\" (UID: \"9e676231-474a-4831-a71f-7788b6d15f03\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cdqwz" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.701453 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xsbk9" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.717461 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.720591 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.724234 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.726604 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-w2t2q" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.752759 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-q2fkl"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.757363 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.762150 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-sx755"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.763223 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-sx755" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.766499 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-x2hgx" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.769172 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlgt8\" (UniqueName: \"kubernetes.io/projected/4e16647b-338f-45cf-b590-419a41d36314-kube-api-access-dlgt8\") pod \"manila-operator-controller-manager-54f6768c69-8s6fh\" (UID: \"4e16647b-338f-45cf-b590-419a41d36314\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8s6fh" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.769236 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl62s\" (UniqueName: \"kubernetes.io/projected/f617ca23-fad3-4ff8-9c11-8a0c34458bb0-kube-api-access-bl62s\") pod \"mariadb-operator-controller-manager-6994f66f48-hnr8w\" (UID: \"f617ca23-fad3-4ff8-9c11-8a0c34458bb0\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-hnr8w" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.769262 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mk8x\" (UniqueName: \"kubernetes.io/projected/c8e2fb2f-471b-4bf5-a57f-1a175da3c9fe-kube-api-access-9mk8x\") pod \"neutron-operator-controller-manager-64ddbf8bb-dr4l8\" (UID: \"c8e2fb2f-471b-4bf5-a57f-1a175da3c9fe\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-dr4l8" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.769304 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4vjr\" (UniqueName: 
\"kubernetes.io/projected/8724640a-57a7-402e-9bf8-a40105f068a0-kube-api-access-k4vjr\") pod \"keystone-operator-controller-manager-b4d948c87-ql7fv\" (UID: \"8724640a-57a7-402e-9bf8-a40105f068a0\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ql7fv" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.769346 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6zs4\" (UniqueName: \"kubernetes.io/projected/eff6a2d8-85c4-4d00-b10f-f6b8b9266b94-kube-api-access-s6zs4\") pod \"nova-operator-controller-manager-567668f5cf-hkcjr\" (UID: \"eff6a2d8-85c4-4d00-b10f-f6b8b9266b94\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-hkcjr" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.769435 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfb7d\" (UniqueName: \"kubernetes.io/projected/4959d7c9-42b3-479d-a5d9-f2d2a941b57f-kube-api-access-nfb7d\") pod \"ironic-operator-controller-manager-554564d7fc-rwl5h\" (UID: \"4959d7c9-42b3-479d-a5d9-f2d2a941b57f\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-rwl5h" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.775870 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-lh792" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.791040 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cdqwz" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.794728 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-sx755"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.810503 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4vjr\" (UniqueName: \"kubernetes.io/projected/8724640a-57a7-402e-9bf8-a40105f068a0-kube-api-access-k4vjr\") pod \"keystone-operator-controller-manager-b4d948c87-ql7fv\" (UID: \"8724640a-57a7-402e-9bf8-a40105f068a0\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ql7fv" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.815541 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfb7d\" (UniqueName: \"kubernetes.io/projected/4959d7c9-42b3-479d-a5d9-f2d2a941b57f-kube-api-access-nfb7d\") pod \"ironic-operator-controller-manager-554564d7fc-rwl5h\" (UID: \"4959d7c9-42b3-479d-a5d9-f2d2a941b57f\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-rwl5h" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.834815 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-fbz66"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.836273 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-fbz66" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.838009 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-skf8k" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.865475 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-fbz66"] Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.871146 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgpck\" (UniqueName: \"kubernetes.io/projected/aab10670-0381-43b4-b9a6-e6c1c86fb4a7-kube-api-access-wgpck\") pod \"octavia-operator-controller-manager-69f8888797-5rm5q\" (UID: \"aab10670-0381-43b4-b9a6-e6c1c86fb4a7\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-5rm5q" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.871237 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlgt8\" (UniqueName: \"kubernetes.io/projected/4e16647b-338f-45cf-b590-419a41d36314-kube-api-access-dlgt8\") pod \"manila-operator-controller-manager-54f6768c69-8s6fh\" (UID: \"4e16647b-338f-45cf-b590-419a41d36314\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8s6fh" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.871274 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl62s\" (UniqueName: \"kubernetes.io/projected/f617ca23-fad3-4ff8-9c11-8a0c34458bb0-kube-api-access-bl62s\") pod \"mariadb-operator-controller-manager-6994f66f48-hnr8w\" (UID: \"f617ca23-fad3-4ff8-9c11-8a0c34458bb0\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-hnr8w" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.871291 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9mk8x\" (UniqueName: \"kubernetes.io/projected/c8e2fb2f-471b-4bf5-a57f-1a175da3c9fe-kube-api-access-9mk8x\") pod \"neutron-operator-controller-manager-64ddbf8bb-dr4l8\" (UID: \"c8e2fb2f-471b-4bf5-a57f-1a175da3c9fe\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-dr4l8" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.871313 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxl9w\" (UniqueName: \"kubernetes.io/projected/81610b83-5cb3-41d5-81c6-a25ed9a86e25-kube-api-access-pxl9w\") pod \"ovn-operator-controller-manager-d44cf6b75-q2fkl\" (UID: \"81610b83-5cb3-41d5-81c6-a25ed9a86e25\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-q2fkl" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.871338 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgz72\" (UniqueName: \"kubernetes.io/projected/c41c59d7-6daa-4dac-b5f1-22c3886ff6f4-kube-api-access-lgz72\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt\" (UID: \"c41c59d7-6daa-4dac-b5f1-22c3886ff6f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.871483 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6zs4\" (UniqueName: \"kubernetes.io/projected/eff6a2d8-85c4-4d00-b10f-f6b8b9266b94-kube-api-access-s6zs4\") pod \"nova-operator-controller-manager-567668f5cf-hkcjr\" (UID: \"eff6a2d8-85c4-4d00-b10f-f6b8b9266b94\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-hkcjr" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.871512 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/c41c59d7-6daa-4dac-b5f1-22c3886ff6f4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt\" (UID: \"c41c59d7-6daa-4dac-b5f1-22c3886ff6f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.871692 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g8dk\" (UniqueName: \"kubernetes.io/projected/88fe4f2b-4703-4c14-bc5d-c5abfec17e62-kube-api-access-5g8dk\") pod \"placement-operator-controller-manager-8497b45c89-sx755\" (UID: \"88fe4f2b-4703-4c14-bc5d-c5abfec17e62\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-sx755" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.898781 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6zs4\" (UniqueName: \"kubernetes.io/projected/eff6a2d8-85c4-4d00-b10f-f6b8b9266b94-kube-api-access-s6zs4\") pod \"nova-operator-controller-manager-567668f5cf-hkcjr\" (UID: \"eff6a2d8-85c4-4d00-b10f-f6b8b9266b94\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-hkcjr" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.907550 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl62s\" (UniqueName: \"kubernetes.io/projected/f617ca23-fad3-4ff8-9c11-8a0c34458bb0-kube-api-access-bl62s\") pod \"mariadb-operator-controller-manager-6994f66f48-hnr8w\" (UID: \"f617ca23-fad3-4ff8-9c11-8a0c34458bb0\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-hnr8w" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.912923 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlgt8\" (UniqueName: \"kubernetes.io/projected/4e16647b-338f-45cf-b590-419a41d36314-kube-api-access-dlgt8\") pod \"manila-operator-controller-manager-54f6768c69-8s6fh\" (UID: 
\"4e16647b-338f-45cf-b590-419a41d36314\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8s6fh" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.945787 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mk8x\" (UniqueName: \"kubernetes.io/projected/c8e2fb2f-471b-4bf5-a57f-1a175da3c9fe-kube-api-access-9mk8x\") pod \"neutron-operator-controller-manager-64ddbf8bb-dr4l8\" (UID: \"c8e2fb2f-471b-4bf5-a57f-1a175da3c9fe\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-dr4l8" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.978653 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgpck\" (UniqueName: \"kubernetes.io/projected/aab10670-0381-43b4-b9a6-e6c1c86fb4a7-kube-api-access-wgpck\") pod \"octavia-operator-controller-manager-69f8888797-5rm5q\" (UID: \"aab10670-0381-43b4-b9a6-e6c1c86fb4a7\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-5rm5q" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.978735 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxl9w\" (UniqueName: \"kubernetes.io/projected/81610b83-5cb3-41d5-81c6-a25ed9a86e25-kube-api-access-pxl9w\") pod \"ovn-operator-controller-manager-d44cf6b75-q2fkl\" (UID: \"81610b83-5cb3-41d5-81c6-a25ed9a86e25\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-q2fkl" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.978775 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgz72\" (UniqueName: \"kubernetes.io/projected/c41c59d7-6daa-4dac-b5f1-22c3886ff6f4-kube-api-access-lgz72\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt\" (UID: \"c41c59d7-6daa-4dac-b5f1-22c3886ff6f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt" Feb 16 21:10:45 crc 
kubenswrapper[4811]: I0216 21:10:45.978834 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c41c59d7-6daa-4dac-b5f1-22c3886ff6f4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt\" (UID: \"c41c59d7-6daa-4dac-b5f1-22c3886ff6f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.978876 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g8dk\" (UniqueName: \"kubernetes.io/projected/88fe4f2b-4703-4c14-bc5d-c5abfec17e62-kube-api-access-5g8dk\") pod \"placement-operator-controller-manager-8497b45c89-sx755\" (UID: \"88fe4f2b-4703-4c14-bc5d-c5abfec17e62\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-sx755" Feb 16 21:10:45 crc kubenswrapper[4811]: I0216 21:10:45.978937 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz5mg\" (UniqueName: \"kubernetes.io/projected/017b9ae7-6bf5-4781-a73e-293edb18f921-kube-api-access-vz5mg\") pod \"swift-operator-controller-manager-68f46476f-fbz66\" (UID: \"017b9ae7-6bf5-4781-a73e-293edb18f921\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-fbz66" Feb 16 21:10:45 crc kubenswrapper[4811]: E0216 21:10:45.982335 4811 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:10:45 crc kubenswrapper[4811]: E0216 21:10:45.982428 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c41c59d7-6daa-4dac-b5f1-22c3886ff6f4-cert podName:c41c59d7-6daa-4dac-b5f1-22c3886ff6f4 nodeName:}" failed. No retries permitted until 2026-02-16 21:10:46.482403089 +0000 UTC m=+864.411699027 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c41c59d7-6daa-4dac-b5f1-22c3886ff6f4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt" (UID: "c41c59d7-6daa-4dac-b5f1-22c3886ff6f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.004498 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-cqrfg"] Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.006170 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-cqrfg" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.006521 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g8dk\" (UniqueName: \"kubernetes.io/projected/88fe4f2b-4703-4c14-bc5d-c5abfec17e62-kube-api-access-5g8dk\") pod \"placement-operator-controller-manager-8497b45c89-sx755\" (UID: \"88fe4f2b-4703-4c14-bc5d-c5abfec17e62\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-sx755" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.010986 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-z77vm" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.012885 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgz72\" (UniqueName: \"kubernetes.io/projected/c41c59d7-6daa-4dac-b5f1-22c3886ff6f4-kube-api-access-lgz72\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt\" (UID: \"c41c59d7-6daa-4dac-b5f1-22c3886ff6f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.016460 4811 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-wgpck\" (UniqueName: \"kubernetes.io/projected/aab10670-0381-43b4-b9a6-e6c1c86fb4a7-kube-api-access-wgpck\") pod \"octavia-operator-controller-manager-69f8888797-5rm5q\" (UID: \"aab10670-0381-43b4-b9a6-e6c1c86fb4a7\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-5rm5q" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.027245 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxl9w\" (UniqueName: \"kubernetes.io/projected/81610b83-5cb3-41d5-81c6-a25ed9a86e25-kube-api-access-pxl9w\") pod \"ovn-operator-controller-manager-d44cf6b75-q2fkl\" (UID: \"81610b83-5cb3-41d5-81c6-a25ed9a86e25\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-q2fkl" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.033924 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-rwl5h" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.044771 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-cqrfg"] Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.053146 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ql7fv" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.085080 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-2n6tm"] Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.087153 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-2n6tm" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.088360 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz5mg\" (UniqueName: \"kubernetes.io/projected/017b9ae7-6bf5-4781-a73e-293edb18f921-kube-api-access-vz5mg\") pod \"swift-operator-controller-manager-68f46476f-fbz66\" (UID: \"017b9ae7-6bf5-4781-a73e-293edb18f921\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-fbz66" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.088606 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8s6fh" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.092142 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-2dv7v" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.093616 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-hnr8w" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.107839 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz5mg\" (UniqueName: \"kubernetes.io/projected/017b9ae7-6bf5-4781-a73e-293edb18f921-kube-api-access-vz5mg\") pod \"swift-operator-controller-manager-68f46476f-fbz66\" (UID: \"017b9ae7-6bf5-4781-a73e-293edb18f921\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-fbz66" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.120403 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-dr4l8" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.120979 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-2n6tm"] Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.142996 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-hkcjr" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.160017 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-q2fkl" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.164271 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-s84xq"] Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.174841 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s84xq" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.174728 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-s84xq"] Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.180618 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-k7h2h" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.187744 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-5rm5q" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.197754 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtmh2\" (UniqueName: \"kubernetes.io/projected/e2f72d19-6fc4-4fb7-8ebc-b089dc0e8231-kube-api-access-gtmh2\") pod \"test-operator-controller-manager-7866795846-2n6tm\" (UID: \"e2f72d19-6fc4-4fb7-8ebc-b089dc0e8231\") " pod="openstack-operators/test-operator-controller-manager-7866795846-2n6tm" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.197899 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5w68\" (UniqueName: \"kubernetes.io/projected/21df9513-6f5c-45d7-b7d7-4a901037433a-kube-api-access-z5w68\") pod \"telemetry-operator-controller-manager-7d4dd64c87-cqrfg\" (UID: \"21df9513-6f5c-45d7-b7d7-4a901037433a\") " pod="openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-cqrfg" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.197967 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df9a27af-f077-408b-8559-29f9c41b7d78-cert\") pod \"infra-operator-controller-manager-79d975b745-qpcgx\" (UID: \"df9a27af-f077-408b-8559-29f9c41b7d78\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qpcgx" Feb 16 21:10:46 crc kubenswrapper[4811]: E0216 21:10:46.198140 4811 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 21:10:46 crc kubenswrapper[4811]: E0216 21:10:46.198213 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df9a27af-f077-408b-8559-29f9c41b7d78-cert podName:df9a27af-f077-408b-8559-29f9c41b7d78 nodeName:}" failed. 
No retries permitted until 2026-02-16 21:10:47.19818228 +0000 UTC m=+865.127478218 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/df9a27af-f077-408b-8559-29f9c41b7d78-cert") pod "infra-operator-controller-manager-79d975b745-qpcgx" (UID: "df9a27af-f077-408b-8559-29f9c41b7d78") : secret "infra-operator-webhook-server-cert" not found Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.198530 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw"] Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.199888 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.202067 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.202838 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.202899 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-fqh8t" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.210666 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw"] Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.232221 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-46224"] Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.233851 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-46224" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.239787 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-dmtdg" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.250251 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-46224"] Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.265628 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-4m2sl"] Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.282055 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-sx755" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.299079 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5w68\" (UniqueName: \"kubernetes.io/projected/21df9513-6f5c-45d7-b7d7-4a901037433a-kube-api-access-z5w68\") pod \"telemetry-operator-controller-manager-7d4dd64c87-cqrfg\" (UID: \"21df9513-6f5c-45d7-b7d7-4a901037433a\") " pod="openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-cqrfg" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.299129 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-metrics-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-v9slw\" (UID: \"e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.299228 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbpsh\" (UniqueName: \"kubernetes.io/projected/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-kube-api-access-lbpsh\") pod \"openstack-operator-controller-manager-86b9cf86d-v9slw\" (UID: \"e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.299261 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtmh2\" (UniqueName: \"kubernetes.io/projected/e2f72d19-6fc4-4fb7-8ebc-b089dc0e8231-kube-api-access-gtmh2\") pod \"test-operator-controller-manager-7866795846-2n6tm\" (UID: \"e2f72d19-6fc4-4fb7-8ebc-b089dc0e8231\") " pod="openstack-operators/test-operator-controller-manager-7866795846-2n6tm" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.299286 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8q6m\" (UniqueName: \"kubernetes.io/projected/7f0ce1fe-b0a7-4637-927c-350d6a383cab-kube-api-access-m8q6m\") pod \"watcher-operator-controller-manager-5db88f68c-s84xq\" (UID: \"7f0ce1fe-b0a7-4637-927c-350d6a383cab\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s84xq" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.299318 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-webhook-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-v9slw\" (UID: \"e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.304546 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-fbz66" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.317605 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5w68\" (UniqueName: \"kubernetes.io/projected/21df9513-6f5c-45d7-b7d7-4a901037433a-kube-api-access-z5w68\") pod \"telemetry-operator-controller-manager-7d4dd64c87-cqrfg\" (UID: \"21df9513-6f5c-45d7-b7d7-4a901037433a\") " pod="openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-cqrfg" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.322865 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtmh2\" (UniqueName: \"kubernetes.io/projected/e2f72d19-6fc4-4fb7-8ebc-b089dc0e8231-kube-api-access-gtmh2\") pod \"test-operator-controller-manager-7866795846-2n6tm\" (UID: \"e2f72d19-6fc4-4fb7-8ebc-b089dc0e8231\") " pod="openstack-operators/test-operator-controller-manager-7866795846-2n6tm" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.358450 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-cqrfg" Feb 16 21:10:46 crc kubenswrapper[4811]: W0216 21:10:46.387451 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa86f7ef_e087_4967_acb0_3d5e36d5629e.slice/crio-4c002433482e0812c5a362834299f9d1e27c624b4561955d522c121e774c495c WatchSource:0}: Error finding container 4c002433482e0812c5a362834299f9d1e27c624b4561955d522c121e774c495c: Status 404 returned error can't find the container with id 4c002433482e0812c5a362834299f9d1e27c624b4561955d522c121e774c495c Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.387687 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-4m2sl" event={"ID":"a930b399-b523-4186-8bf8-c9f071a52b0d","Type":"ContainerStarted","Data":"1d3285e47f76e32cde59e2df90232036aafc28566e20302eb8337bb0aa663c56"} Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.396297 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-q89bq"] Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.400740 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-metrics-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-v9slw\" (UID: \"e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.400829 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8lfq\" (UniqueName: \"kubernetes.io/projected/0c7bb0d1-f8b1-4e01-8001-354628802f27-kube-api-access-b8lfq\") pod \"rabbitmq-cluster-operator-manager-668c99d594-46224\" (UID: 
\"0c7bb0d1-f8b1-4e01-8001-354628802f27\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-46224" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.400858 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbpsh\" (UniqueName: \"kubernetes.io/projected/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-kube-api-access-lbpsh\") pod \"openstack-operator-controller-manager-86b9cf86d-v9slw\" (UID: \"e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.400895 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8q6m\" (UniqueName: \"kubernetes.io/projected/7f0ce1fe-b0a7-4637-927c-350d6a383cab-kube-api-access-m8q6m\") pod \"watcher-operator-controller-manager-5db88f68c-s84xq\" (UID: \"7f0ce1fe-b0a7-4637-927c-350d6a383cab\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s84xq" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.400925 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-webhook-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-v9slw\" (UID: \"e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" Feb 16 21:10:46 crc kubenswrapper[4811]: E0216 21:10:46.401058 4811 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 21:10:46 crc kubenswrapper[4811]: E0216 21:10:46.401110 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-webhook-certs podName:e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2 nodeName:}" failed. 
No retries permitted until 2026-02-16 21:10:46.901094994 +0000 UTC m=+864.830390932 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-webhook-certs") pod "openstack-operator-controller-manager-86b9cf86d-v9slw" (UID: "e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2") : secret "webhook-server-cert" not found Feb 16 21:10:46 crc kubenswrapper[4811]: E0216 21:10:46.401422 4811 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 21:10:46 crc kubenswrapper[4811]: E0216 21:10:46.401449 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-metrics-certs podName:e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2 nodeName:}" failed. No retries permitted until 2026-02-16 21:10:46.901442303 +0000 UTC m=+864.830738241 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-metrics-certs") pod "openstack-operator-controller-manager-86b9cf86d-v9slw" (UID: "e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2") : secret "metrics-server-cert" not found Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.403423 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-jckks"] Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.422166 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-2n6tm" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.426686 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8q6m\" (UniqueName: \"kubernetes.io/projected/7f0ce1fe-b0a7-4637-927c-350d6a383cab-kube-api-access-m8q6m\") pod \"watcher-operator-controller-manager-5db88f68c-s84xq\" (UID: \"7f0ce1fe-b0a7-4637-927c-350d6a383cab\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s84xq" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.454121 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbpsh\" (UniqueName: \"kubernetes.io/projected/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-kube-api-access-lbpsh\") pod \"openstack-operator-controller-manager-86b9cf86d-v9slw\" (UID: \"e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.503837 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c41c59d7-6daa-4dac-b5f1-22c3886ff6f4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt\" (UID: \"c41c59d7-6daa-4dac-b5f1-22c3886ff6f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.503973 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8lfq\" (UniqueName: \"kubernetes.io/projected/0c7bb0d1-f8b1-4e01-8001-354628802f27-kube-api-access-b8lfq\") pod \"rabbitmq-cluster-operator-manager-668c99d594-46224\" (UID: \"0c7bb0d1-f8b1-4e01-8001-354628802f27\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-46224" Feb 16 21:10:46 crc kubenswrapper[4811]: E0216 21:10:46.504569 4811 secret.go:188] 
Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:10:46 crc kubenswrapper[4811]: E0216 21:10:46.504629 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c41c59d7-6daa-4dac-b5f1-22c3886ff6f4-cert podName:c41c59d7-6daa-4dac-b5f1-22c3886ff6f4 nodeName:}" failed. No retries permitted until 2026-02-16 21:10:47.504609103 +0000 UTC m=+865.433905041 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c41c59d7-6daa-4dac-b5f1-22c3886ff6f4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt" (UID: "c41c59d7-6daa-4dac-b5f1-22c3886ff6f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.512325 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-xsbk9"] Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.526506 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s84xq" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.537637 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-lh792"] Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.540479 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8lfq\" (UniqueName: \"kubernetes.io/projected/0c7bb0d1-f8b1-4e01-8001-354628802f27-kube-api-access-b8lfq\") pod \"rabbitmq-cluster-operator-manager-668c99d594-46224\" (UID: \"0c7bb0d1-f8b1-4e01-8001-354628802f27\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-46224" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.607114 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-46224" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.649325 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cdqwz"] Feb 16 21:10:46 crc kubenswrapper[4811]: W0216 21:10:46.695927 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e676231_474a_4831_a71f_7788b6d15f03.slice/crio-cf5d0d4949aa3a270d568f90d54c44acdbaddf80447a5202b5ce29eee718bea4 WatchSource:0}: Error finding container cf5d0d4949aa3a270d568f90d54c44acdbaddf80447a5202b5ce29eee718bea4: Status 404 returned error can't find the container with id cf5d0d4949aa3a270d568f90d54c44acdbaddf80447a5202b5ce29eee718bea4 Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.858226 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-ql7fv"] Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.882172 4811 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-rwl5h"] Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.925664 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-webhook-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-v9slw\" (UID: \"e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" Feb 16 21:10:46 crc kubenswrapper[4811]: I0216 21:10:46.926058 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-metrics-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-v9slw\" (UID: \"e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" Feb 16 21:10:46 crc kubenswrapper[4811]: E0216 21:10:46.926303 4811 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 21:10:46 crc kubenswrapper[4811]: E0216 21:10:46.926385 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-metrics-certs podName:e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2 nodeName:}" failed. No retries permitted until 2026-02-16 21:10:47.926359696 +0000 UTC m=+865.855655644 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-metrics-certs") pod "openstack-operator-controller-manager-86b9cf86d-v9slw" (UID: "e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2") : secret "metrics-server-cert" not found Feb 16 21:10:46 crc kubenswrapper[4811]: E0216 21:10:46.926826 4811 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 21:10:46 crc kubenswrapper[4811]: E0216 21:10:46.926939 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-webhook-certs podName:e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2 nodeName:}" failed. No retries permitted until 2026-02-16 21:10:47.92691085 +0000 UTC m=+865.856206988 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-webhook-certs") pod "openstack-operator-controller-manager-86b9cf86d-v9slw" (UID: "e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2") : secret "webhook-server-cert" not found Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.050188 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-hnr8w"] Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.076442 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-dr4l8"] Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.088213 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-8s6fh"] Feb 16 21:10:47 crc kubenswrapper[4811]: W0216 21:10:47.091988 4811 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8e2fb2f_471b_4bf5_a57f_1a175da3c9fe.slice/crio-0445219327ac2c8370246636a58137329b35e06aea95d2bb96a793c39b1e78a9 WatchSource:0}: Error finding container 0445219327ac2c8370246636a58137329b35e06aea95d2bb96a793c39b1e78a9: Status 404 returned error can't find the container with id 0445219327ac2c8370246636a58137329b35e06aea95d2bb96a793c39b1e78a9 Feb 16 21:10:47 crc kubenswrapper[4811]: W0216 21:10:47.093110 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e16647b_338f_45cf_b590_419a41d36314.slice/crio-c4714dc20c4f61f047deba7bff51aafe13d45e88953cc9392c0fc4c660769bf7 WatchSource:0}: Error finding container c4714dc20c4f61f047deba7bff51aafe13d45e88953cc9392c0fc4c660769bf7: Status 404 returned error can't find the container with id c4714dc20c4f61f047deba7bff51aafe13d45e88953cc9392c0fc4c660769bf7 Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.204389 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-hkcjr"] Feb 16 21:10:47 crc kubenswrapper[4811]: W0216 21:10:47.208668 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeff6a2d8_85c4_4d00_b10f_f6b8b9266b94.slice/crio-86c18761cfa666c9d1a13bf873cc11814848f0fb73b2f0ae599f9d27ffc03399 WatchSource:0}: Error finding container 86c18761cfa666c9d1a13bf873cc11814848f0fb73b2f0ae599f9d27ffc03399: Status 404 returned error can't find the container with id 86c18761cfa666c9d1a13bf873cc11814848f0fb73b2f0ae599f9d27ffc03399 Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.230878 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df9a27af-f077-408b-8559-29f9c41b7d78-cert\") pod \"infra-operator-controller-manager-79d975b745-qpcgx\" (UID: 
\"df9a27af-f077-408b-8559-29f9c41b7d78\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qpcgx" Feb 16 21:10:47 crc kubenswrapper[4811]: E0216 21:10:47.231129 4811 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 21:10:47 crc kubenswrapper[4811]: E0216 21:10:47.231262 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df9a27af-f077-408b-8559-29f9c41b7d78-cert podName:df9a27af-f077-408b-8559-29f9c41b7d78 nodeName:}" failed. No retries permitted until 2026-02-16 21:10:49.23123588 +0000 UTC m=+867.160531818 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/df9a27af-f077-408b-8559-29f9c41b7d78-cert") pod "infra-operator-controller-manager-79d975b745-qpcgx" (UID: "df9a27af-f077-408b-8559-29f9c41b7d78") : secret "infra-operator-webhook-server-cert" not found Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.311911 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-q2fkl"] Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.324777 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-fbz66"] Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.343405 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-46224"] Feb 16 21:10:47 crc kubenswrapper[4811]: W0216 21:10:47.347393 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod017b9ae7_6bf5_4781_a73e_293edb18f921.slice/crio-a830b3730aa1f5a5e026cddf2c12e8df96a3e1ebfaac5732927ba0b1e77fffb6 WatchSource:0}: Error finding container a830b3730aa1f5a5e026cddf2c12e8df96a3e1ebfaac5732927ba0b1e77fffb6: Status 404 
returned error can't find the container with id a830b3730aa1f5a5e026cddf2c12e8df96a3e1ebfaac5732927ba0b1e77fffb6 Feb 16 21:10:47 crc kubenswrapper[4811]: W0216 21:10:47.349145 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c7bb0d1_f8b1_4e01_8001_354628802f27.slice/crio-0570565ea7a49b421369d9cc97e40598db50bc172dd00b99d268a3159a283434 WatchSource:0}: Error finding container 0570565ea7a49b421369d9cc97e40598db50bc172dd00b99d268a3159a283434: Status 404 returned error can't find the container with id 0570565ea7a49b421369d9cc97e40598db50bc172dd00b99d268a3159a283434 Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.351636 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-s84xq"] Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.357680 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-sx755"] Feb 16 21:10:47 crc kubenswrapper[4811]: W0216 21:10:47.358420 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f0ce1fe_b0a7_4637_927c_350d6a383cab.slice/crio-df786bd7eb815f4f6c920dc01ed1a3c9b7d2bbcf600d0d832e2e1433acdadf5a WatchSource:0}: Error finding container df786bd7eb815f4f6c920dc01ed1a3c9b7d2bbcf600d0d832e2e1433acdadf5a: Status 404 returned error can't find the container with id df786bd7eb815f4f6c920dc01ed1a3c9b7d2bbcf600d0d832e2e1433acdadf5a Feb 16 21:10:47 crc kubenswrapper[4811]: W0216 21:10:47.362907 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88fe4f2b_4703_4c14_bc5d_c5abfec17e62.slice/crio-d2a67d391081ce1cb1b0486c46fe40b07061ee9124d9031b6209bd057f41253f WatchSource:0}: Error finding container 
d2a67d391081ce1cb1b0486c46fe40b07061ee9124d9031b6209bd057f41253f: Status 404 returned error can't find the container with id d2a67d391081ce1cb1b0486c46fe40b07061ee9124d9031b6209bd057f41253f Feb 16 21:10:47 crc kubenswrapper[4811]: E0216 21:10:47.365517 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5g8dk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-sx755_openstack-operators(88fe4f2b-4703-4c14-bc5d-c5abfec17e62): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 21:10:47 crc kubenswrapper[4811]: E0216 21:10:47.365812 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m8q6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-s84xq_openstack-operators(7f0ce1fe-b0a7-4637-927c-350d6a383cab): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 21:10:47 crc kubenswrapper[4811]: E0216 21:10:47.368892 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-sx755" podUID="88fe4f2b-4703-4c14-bc5d-c5abfec17e62" Feb 16 21:10:47 crc 
kubenswrapper[4811]: E0216 21:10:47.368929 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s84xq" podUID="7f0ce1fe-b0a7-4637-927c-350d6a383cab" Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.395425 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jckks" event={"ID":"6abf6059-c304-4c75-b9df-89c83549963c","Type":"ContainerStarted","Data":"047931686b13c658901ffd9b45a0fb52b672ba2fcfb5e16b81d758af60fac1b3"} Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.396986 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-fbz66" event={"ID":"017b9ae7-6bf5-4781-a73e-293edb18f921","Type":"ContainerStarted","Data":"a830b3730aa1f5a5e026cddf2c12e8df96a3e1ebfaac5732927ba0b1e77fffb6"} Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.398340 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-q89bq" event={"ID":"fa86f7ef-e087-4967-acb0-3d5e36d5629e","Type":"ContainerStarted","Data":"4c002433482e0812c5a362834299f9d1e27c624b4561955d522c121e774c495c"} Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.401981 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xsbk9" event={"ID":"7d5cf64e-0afc-4017-94b9-8fdf40a7cf89","Type":"ContainerStarted","Data":"59d92f9cb0eaabb8cca0fd57bbf2a829a3b90f075a12c4d46c9d59913bc60f92"} Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.403767 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cdqwz" 
event={"ID":"9e676231-474a-4831-a71f-7788b6d15f03","Type":"ContainerStarted","Data":"cf5d0d4949aa3a270d568f90d54c44acdbaddf80447a5202b5ce29eee718bea4"} Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.405480 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-hnr8w" event={"ID":"f617ca23-fad3-4ff8-9c11-8a0c34458bb0","Type":"ContainerStarted","Data":"29f99a0aef3f15a6eec4587d70b3399eb633b8c0b48b25cfa50dd6d3fce856cb"} Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.407578 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s84xq" event={"ID":"7f0ce1fe-b0a7-4637-927c-350d6a383cab","Type":"ContainerStarted","Data":"df786bd7eb815f4f6c920dc01ed1a3c9b7d2bbcf600d0d832e2e1433acdadf5a"} Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.410981 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-rwl5h" event={"ID":"4959d7c9-42b3-479d-a5d9-f2d2a941b57f","Type":"ContainerStarted","Data":"48c715317a52117955402eb91338edfe0c8e2440173d3a1621dea2a092bc905a"} Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.414052 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-dr4l8" event={"ID":"c8e2fb2f-471b-4bf5-a57f-1a175da3c9fe","Type":"ContainerStarted","Data":"0445219327ac2c8370246636a58137329b35e06aea95d2bb96a793c39b1e78a9"} Feb 16 21:10:47 crc kubenswrapper[4811]: E0216 21:10:47.415095 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s84xq" 
podUID="7f0ce1fe-b0a7-4637-927c-350d6a383cab" Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.415469 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-sx755" event={"ID":"88fe4f2b-4703-4c14-bc5d-c5abfec17e62","Type":"ContainerStarted","Data":"d2a67d391081ce1cb1b0486c46fe40b07061ee9124d9031b6209bd057f41253f"} Feb 16 21:10:47 crc kubenswrapper[4811]: E0216 21:10:47.419089 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-sx755" podUID="88fe4f2b-4703-4c14-bc5d-c5abfec17e62" Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.420289 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8s6fh" event={"ID":"4e16647b-338f-45cf-b590-419a41d36314","Type":"ContainerStarted","Data":"c4714dc20c4f61f047deba7bff51aafe13d45e88953cc9392c0fc4c660769bf7"} Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.421817 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-lh792" event={"ID":"6990871b-47ed-4368-a1f2-f582e0c01e81","Type":"ContainerStarted","Data":"065cf79884eba1b17c71eb3445cfe80c0d036004ac01997b0df5e653edc7d2d5"} Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.422624 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-q2fkl" event={"ID":"81610b83-5cb3-41d5-81c6-a25ed9a86e25","Type":"ContainerStarted","Data":"9503b823314684984f9aa3657e9dc5b46460c582615af3ca7976679bf3fd301b"} Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.441686 4811 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-46224" event={"ID":"0c7bb0d1-f8b1-4e01-8001-354628802f27","Type":"ContainerStarted","Data":"0570565ea7a49b421369d9cc97e40598db50bc172dd00b99d268a3159a283434"} Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.448950 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ql7fv" event={"ID":"8724640a-57a7-402e-9bf8-a40105f068a0","Type":"ContainerStarted","Data":"19dd9da18c5d787fe6ceb02a303cb6d7c4bcb1b846e86deaeb7728dce40f9609"} Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.450253 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-hkcjr" event={"ID":"eff6a2d8-85c4-4d00-b10f-f6b8b9266b94","Type":"ContainerStarted","Data":"86c18761cfa666c9d1a13bf873cc11814848f0fb73b2f0ae599f9d27ffc03399"} Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.454499 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-2n6tm"] Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.467490 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-cqrfg"] Feb 16 21:10:47 crc kubenswrapper[4811]: E0216 21:10:47.468473 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gtmh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-2n6tm_openstack-operators(e2f72d19-6fc4-4fb7-8ebc-b089dc0e8231): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 21:10:47 crc kubenswrapper[4811]: E0216 21:10:47.469830 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-2n6tm" podUID="e2f72d19-6fc4-4fb7-8ebc-b089dc0e8231" Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.476216 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-5rm5q"] Feb 16 21:10:47 crc kubenswrapper[4811]: W0216 21:10:47.483610 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaab10670_0381_43b4_b9a6_e6c1c86fb4a7.slice/crio-7747b6a50e9eaeca1e9af8b77e61eea9fdbfebe338833ff36f2613cd63503354 WatchSource:0}: Error finding container 7747b6a50e9eaeca1e9af8b77e61eea9fdbfebe338833ff36f2613cd63503354: Status 404 returned error can't find the container with id 
7747b6a50e9eaeca1e9af8b77e61eea9fdbfebe338833ff36f2613cd63503354 Feb 16 21:10:47 crc kubenswrapper[4811]: W0216 21:10:47.511431 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21df9513_6f5c_45d7_b7d7_4a901037433a.slice/crio-2f104e6d51aed279880286f83c39a49cf7eb5a47f9572db689f5048e40ec1e00 WatchSource:0}: Error finding container 2f104e6d51aed279880286f83c39a49cf7eb5a47f9572db689f5048e40ec1e00: Status 404 returned error can't find the container with id 2f104e6d51aed279880286f83c39a49cf7eb5a47f9572db689f5048e40ec1e00 Feb 16 21:10:47 crc kubenswrapper[4811]: E0216 21:10:47.519220 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.13:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z5w68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7d4dd64c87-cqrfg_openstack-operators(21df9513-6f5c-45d7-b7d7-4a901037433a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 21:10:47 crc kubenswrapper[4811]: E0216 21:10:47.520377 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-cqrfg" podUID="21df9513-6f5c-45d7-b7d7-4a901037433a" Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.539037 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c41c59d7-6daa-4dac-b5f1-22c3886ff6f4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt\" (UID: \"c41c59d7-6daa-4dac-b5f1-22c3886ff6f4\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt" Feb 16 21:10:47 crc kubenswrapper[4811]: E0216 21:10:47.539681 4811 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:10:47 crc kubenswrapper[4811]: E0216 21:10:47.539735 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c41c59d7-6daa-4dac-b5f1-22c3886ff6f4-cert podName:c41c59d7-6daa-4dac-b5f1-22c3886ff6f4 nodeName:}" failed. No retries permitted until 2026-02-16 21:10:49.539720376 +0000 UTC m=+867.469016314 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c41c59d7-6daa-4dac-b5f1-22c3886ff6f4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt" (UID: "c41c59d7-6daa-4dac-b5f1-22c3886ff6f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.944533 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-metrics-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-v9slw\" (UID: \"e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" Feb 16 21:10:47 crc kubenswrapper[4811]: I0216 21:10:47.945122 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-webhook-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-v9slw\" (UID: \"e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" Feb 16 21:10:47 crc kubenswrapper[4811]: E0216 21:10:47.945295 4811 secret.go:188] Couldn't get 
secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 21:10:47 crc kubenswrapper[4811]: E0216 21:10:47.945361 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-webhook-certs podName:e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2 nodeName:}" failed. No retries permitted until 2026-02-16 21:10:49.945339279 +0000 UTC m=+867.874635237 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-webhook-certs") pod "openstack-operator-controller-manager-86b9cf86d-v9slw" (UID: "e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2") : secret "webhook-server-cert" not found Feb 16 21:10:47 crc kubenswrapper[4811]: E0216 21:10:47.946019 4811 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 21:10:47 crc kubenswrapper[4811]: E0216 21:10:47.946064 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-metrics-certs podName:e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2 nodeName:}" failed. No retries permitted until 2026-02-16 21:10:49.946052847 +0000 UTC m=+867.875348795 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-metrics-certs") pod "openstack-operator-controller-manager-86b9cf86d-v9slw" (UID: "e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2") : secret "metrics-server-cert" not found Feb 16 21:10:48 crc kubenswrapper[4811]: I0216 21:10:48.466962 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-5rm5q" event={"ID":"aab10670-0381-43b4-b9a6-e6c1c86fb4a7","Type":"ContainerStarted","Data":"7747b6a50e9eaeca1e9af8b77e61eea9fdbfebe338833ff36f2613cd63503354"} Feb 16 21:10:48 crc kubenswrapper[4811]: I0216 21:10:48.480724 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-2n6tm" event={"ID":"e2f72d19-6fc4-4fb7-8ebc-b089dc0e8231","Type":"ContainerStarted","Data":"cfe3cb56c873c99a4a5b5bc8e3b48d0c50ba2c34e0a42242fb6bd8434aeb2d75"} Feb 16 21:10:48 crc kubenswrapper[4811]: E0216 21:10:48.484978 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-2n6tm" podUID="e2f72d19-6fc4-4fb7-8ebc-b089dc0e8231" Feb 16 21:10:48 crc kubenswrapper[4811]: I0216 21:10:48.485741 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-cqrfg" event={"ID":"21df9513-6f5c-45d7-b7d7-4a901037433a","Type":"ContainerStarted","Data":"2f104e6d51aed279880286f83c39a49cf7eb5a47f9572db689f5048e40ec1e00"} Feb 16 21:10:48 crc kubenswrapper[4811]: E0216 21:10:48.488070 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-sx755" podUID="88fe4f2b-4703-4c14-bc5d-c5abfec17e62" Feb 16 21:10:48 crc kubenswrapper[4811]: E0216 21:10:48.488134 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.13:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-cqrfg" podUID="21df9513-6f5c-45d7-b7d7-4a901037433a" Feb 16 21:10:48 crc kubenswrapper[4811]: E0216 21:10:48.495648 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s84xq" podUID="7f0ce1fe-b0a7-4637-927c-350d6a383cab" Feb 16 21:10:49 crc kubenswrapper[4811]: I0216 21:10:49.272298 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df9a27af-f077-408b-8559-29f9c41b7d78-cert\") pod \"infra-operator-controller-manager-79d975b745-qpcgx\" (UID: \"df9a27af-f077-408b-8559-29f9c41b7d78\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qpcgx" Feb 16 21:10:49 crc kubenswrapper[4811]: E0216 21:10:49.273069 4811 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 21:10:49 crc kubenswrapper[4811]: E0216 21:10:49.273145 4811 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/df9a27af-f077-408b-8559-29f9c41b7d78-cert podName:df9a27af-f077-408b-8559-29f9c41b7d78 nodeName:}" failed. No retries permitted until 2026-02-16 21:10:53.273122597 +0000 UTC m=+871.202418535 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/df9a27af-f077-408b-8559-29f9c41b7d78-cert") pod "infra-operator-controller-manager-79d975b745-qpcgx" (UID: "df9a27af-f077-408b-8559-29f9c41b7d78") : secret "infra-operator-webhook-server-cert" not found Feb 16 21:10:49 crc kubenswrapper[4811]: E0216 21:10:49.494334 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-2n6tm" podUID="e2f72d19-6fc4-4fb7-8ebc-b089dc0e8231" Feb 16 21:10:49 crc kubenswrapper[4811]: E0216 21:10:49.495488 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.13:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-cqrfg" podUID="21df9513-6f5c-45d7-b7d7-4a901037433a" Feb 16 21:10:49 crc kubenswrapper[4811]: I0216 21:10:49.577969 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c41c59d7-6daa-4dac-b5f1-22c3886ff6f4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt\" (UID: \"c41c59d7-6daa-4dac-b5f1-22c3886ff6f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt" Feb 16 21:10:49 crc kubenswrapper[4811]: E0216 21:10:49.578109 4811 secret.go:188] 
Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:10:49 crc kubenswrapper[4811]: E0216 21:10:49.578179 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c41c59d7-6daa-4dac-b5f1-22c3886ff6f4-cert podName:c41c59d7-6daa-4dac-b5f1-22c3886ff6f4 nodeName:}" failed. No retries permitted until 2026-02-16 21:10:53.578163885 +0000 UTC m=+871.507459823 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c41c59d7-6daa-4dac-b5f1-22c3886ff6f4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt" (UID: "c41c59d7-6daa-4dac-b5f1-22c3886ff6f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:10:49 crc kubenswrapper[4811]: I0216 21:10:49.986688 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-webhook-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-v9slw\" (UID: \"e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" Feb 16 21:10:49 crc kubenswrapper[4811]: I0216 21:10:49.986783 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-metrics-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-v9slw\" (UID: \"e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" Feb 16 21:10:49 crc kubenswrapper[4811]: E0216 21:10:49.986883 4811 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 21:10:49 crc kubenswrapper[4811]: E0216 21:10:49.986980 4811 secret.go:188] 
Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 21:10:49 crc kubenswrapper[4811]: E0216 21:10:49.986996 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-webhook-certs podName:e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2 nodeName:}" failed. No retries permitted until 2026-02-16 21:10:53.986969619 +0000 UTC m=+871.916265587 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-webhook-certs") pod "openstack-operator-controller-manager-86b9cf86d-v9slw" (UID: "e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2") : secret "webhook-server-cert" not found Feb 16 21:10:49 crc kubenswrapper[4811]: E0216 21:10:49.987030 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-metrics-certs podName:e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2 nodeName:}" failed. No retries permitted until 2026-02-16 21:10:53.987016341 +0000 UTC m=+871.916312299 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-metrics-certs") pod "openstack-operator-controller-manager-86b9cf86d-v9slw" (UID: "e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2") : secret "metrics-server-cert" not found Feb 16 21:10:53 crc kubenswrapper[4811]: I0216 21:10:53.346044 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df9a27af-f077-408b-8559-29f9c41b7d78-cert\") pod \"infra-operator-controller-manager-79d975b745-qpcgx\" (UID: \"df9a27af-f077-408b-8559-29f9c41b7d78\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qpcgx" Feb 16 21:10:53 crc kubenswrapper[4811]: E0216 21:10:53.346260 4811 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 21:10:53 crc kubenswrapper[4811]: E0216 21:10:53.346463 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df9a27af-f077-408b-8559-29f9c41b7d78-cert podName:df9a27af-f077-408b-8559-29f9c41b7d78 nodeName:}" failed. No retries permitted until 2026-02-16 21:11:01.346446264 +0000 UTC m=+879.275742202 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/df9a27af-f077-408b-8559-29f9c41b7d78-cert") pod "infra-operator-controller-manager-79d975b745-qpcgx" (UID: "df9a27af-f077-408b-8559-29f9c41b7d78") : secret "infra-operator-webhook-server-cert" not found Feb 16 21:10:53 crc kubenswrapper[4811]: I0216 21:10:53.649275 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c41c59d7-6daa-4dac-b5f1-22c3886ff6f4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt\" (UID: \"c41c59d7-6daa-4dac-b5f1-22c3886ff6f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt" Feb 16 21:10:53 crc kubenswrapper[4811]: E0216 21:10:53.649421 4811 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:10:53 crc kubenswrapper[4811]: E0216 21:10:53.649506 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c41c59d7-6daa-4dac-b5f1-22c3886ff6f4-cert podName:c41c59d7-6daa-4dac-b5f1-22c3886ff6f4 nodeName:}" failed. No retries permitted until 2026-02-16 21:11:01.649489132 +0000 UTC m=+879.578785070 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c41c59d7-6daa-4dac-b5f1-22c3886ff6f4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt" (UID: "c41c59d7-6daa-4dac-b5f1-22c3886ff6f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:10:54 crc kubenswrapper[4811]: I0216 21:10:54.055102 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-webhook-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-v9slw\" (UID: \"e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" Feb 16 21:10:54 crc kubenswrapper[4811]: I0216 21:10:54.055188 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-metrics-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-v9slw\" (UID: \"e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" Feb 16 21:10:54 crc kubenswrapper[4811]: E0216 21:10:54.055311 4811 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 21:10:54 crc kubenswrapper[4811]: E0216 21:10:54.055389 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-webhook-certs podName:e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2 nodeName:}" failed. No retries permitted until 2026-02-16 21:11:02.055371042 +0000 UTC m=+879.984666980 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-webhook-certs") pod "openstack-operator-controller-manager-86b9cf86d-v9slw" (UID: "e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2") : secret "webhook-server-cert" not found Feb 16 21:10:54 crc kubenswrapper[4811]: E0216 21:10:54.055400 4811 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 21:10:54 crc kubenswrapper[4811]: E0216 21:10:54.055613 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-metrics-certs podName:e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2 nodeName:}" failed. No retries permitted until 2026-02-16 21:11:02.055581497 +0000 UTC m=+879.984877475 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-metrics-certs") pod "openstack-operator-controller-manager-86b9cf86d-v9slw" (UID: "e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2") : secret "metrics-server-cert" not found Feb 16 21:10:59 crc kubenswrapper[4811]: E0216 21:10:59.378507 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" Feb 16 21:10:59 crc kubenswrapper[4811]: E0216 21:10:59.379093 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wgpck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-5rm5q_openstack-operators(aab10670-0381-43b4-b9a6-e6c1c86fb4a7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:10:59 crc kubenswrapper[4811]: E0216 21:10:59.381315 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-5rm5q" podUID="aab10670-0381-43b4-b9a6-e6c1c86fb4a7" Feb 16 21:10:59 crc kubenswrapper[4811]: E0216 21:10:59.591993 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-5rm5q" podUID="aab10670-0381-43b4-b9a6-e6c1c86fb4a7" Feb 16 21:10:59 crc kubenswrapper[4811]: E0216 21:10:59.971059 4811 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" Feb 16 21:10:59 crc kubenswrapper[4811]: E0216 21:10:59.971481 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vz5mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-fbz66_openstack-operators(017b9ae7-6bf5-4781-a73e-293edb18f921): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:10:59 crc kubenswrapper[4811]: E0216 21:10:59.973232 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-fbz66" podUID="017b9ae7-6bf5-4781-a73e-293edb18f921" Feb 16 21:11:00 crc kubenswrapper[4811]: E0216 21:11:00.359288 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 16 21:11:00 crc kubenswrapper[4811]: E0216 21:11:00.359470 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b8lfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-46224_openstack-operators(0c7bb0d1-f8b1-4e01-8001-354628802f27): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:11:00 crc kubenswrapper[4811]: E0216 21:11:00.360826 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-46224" podUID="0c7bb0d1-f8b1-4e01-8001-354628802f27" Feb 16 21:11:00 crc kubenswrapper[4811]: E0216 21:11:00.597959 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-46224" podUID="0c7bb0d1-f8b1-4e01-8001-354628802f27" Feb 16 21:11:00 crc kubenswrapper[4811]: E0216 21:11:00.600266 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-fbz66" podUID="017b9ae7-6bf5-4781-a73e-293edb18f921" Feb 16 21:11:01 crc kubenswrapper[4811]: E0216 21:11:01.305965 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 16 21:11:01 crc kubenswrapper[4811]: E0216 21:11:01.306169 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s6zs4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-hkcjr_openstack-operators(eff6a2d8-85c4-4d00-b10f-f6b8b9266b94): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:11:01 crc kubenswrapper[4811]: E0216 21:11:01.307417 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-hkcjr" podUID="eff6a2d8-85c4-4d00-b10f-f6b8b9266b94" Feb 16 21:11:01 crc kubenswrapper[4811]: I0216 21:11:01.394714 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df9a27af-f077-408b-8559-29f9c41b7d78-cert\") pod \"infra-operator-controller-manager-79d975b745-qpcgx\" (UID: \"df9a27af-f077-408b-8559-29f9c41b7d78\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qpcgx" Feb 16 21:11:01 crc kubenswrapper[4811]: I0216 21:11:01.404639 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/df9a27af-f077-408b-8559-29f9c41b7d78-cert\") pod \"infra-operator-controller-manager-79d975b745-qpcgx\" (UID: \"df9a27af-f077-408b-8559-29f9c41b7d78\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qpcgx" Feb 16 21:11:01 crc kubenswrapper[4811]: I0216 21:11:01.436258 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qpcgx" Feb 16 21:11:01 crc kubenswrapper[4811]: E0216 21:11:01.605562 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-hkcjr" podUID="eff6a2d8-85c4-4d00-b10f-f6b8b9266b94" Feb 16 21:11:01 crc kubenswrapper[4811]: I0216 21:11:01.700662 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c41c59d7-6daa-4dac-b5f1-22c3886ff6f4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt\" (UID: \"c41c59d7-6daa-4dac-b5f1-22c3886ff6f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt" Feb 16 21:11:01 crc kubenswrapper[4811]: E0216 21:11:01.700960 4811 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:11:01 crc kubenswrapper[4811]: E0216 21:11:01.701043 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c41c59d7-6daa-4dac-b5f1-22c3886ff6f4-cert podName:c41c59d7-6daa-4dac-b5f1-22c3886ff6f4 nodeName:}" failed. No retries permitted until 2026-02-16 21:11:17.701015389 +0000 UTC m=+895.630311337 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c41c59d7-6daa-4dac-b5f1-22c3886ff6f4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt" (UID: "c41c59d7-6daa-4dac-b5f1-22c3886ff6f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:11:01 crc kubenswrapper[4811]: E0216 21:11:01.965471 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 16 21:11:01 crc kubenswrapper[4811]: E0216 21:11:01.965849 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k4vjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-ql7fv_openstack-operators(8724640a-57a7-402e-9bf8-a40105f068a0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:11:01 crc kubenswrapper[4811]: E0216 21:11:01.967631 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ql7fv" podUID="8724640a-57a7-402e-9bf8-a40105f068a0" Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.106625 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-webhook-certs\") pod 
\"openstack-operator-controller-manager-86b9cf86d-v9slw\" (UID: \"e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.106694 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-metrics-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-v9slw\" (UID: \"e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" Feb 16 21:11:02 crc kubenswrapper[4811]: E0216 21:11:02.106799 4811 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 21:11:02 crc kubenswrapper[4811]: E0216 21:11:02.106862 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-webhook-certs podName:e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2 nodeName:}" failed. No retries permitted until 2026-02-16 21:11:18.106845208 +0000 UTC m=+896.036141146 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-webhook-certs") pod "openstack-operator-controller-manager-86b9cf86d-v9slw" (UID: "e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2") : secret "webhook-server-cert" not found Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.117824 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-metrics-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-v9slw\" (UID: \"e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.394964 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-qpcgx"] Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.619804 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8s6fh" event={"ID":"4e16647b-338f-45cf-b590-419a41d36314","Type":"ContainerStarted","Data":"ef7f6b190572814b61b39420b888b721a92f84ce95440c0bdee2f20349c3682b"} Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.619958 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8s6fh" Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.625957 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-q89bq" event={"ID":"fa86f7ef-e087-4967-acb0-3d5e36d5629e","Type":"ContainerStarted","Data":"5810457a3f089617cd167151f7898bac4fd93969931325ad7cbf1c567fae259f"} Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.626353 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-q89bq" Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.642846 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-hnr8w" event={"ID":"f617ca23-fad3-4ff8-9c11-8a0c34458bb0","Type":"ContainerStarted","Data":"632d839ac442719cbd04f75308cdf9c97c59eb086e2c71a470e1ba80ed5abdf4"} Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.642960 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-hnr8w" Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.644772 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xsbk9" event={"ID":"7d5cf64e-0afc-4017-94b9-8fdf40a7cf89","Type":"ContainerStarted","Data":"764b6492e0d425c42984baa4652b8991a78d9f0e1c4af071aa1f999a742f33ca"} Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.644875 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xsbk9" Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.651518 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cdqwz" event={"ID":"9e676231-474a-4831-a71f-7788b6d15f03","Type":"ContainerStarted","Data":"17a92fe71c63d98b2e9a5683b47d85a3545e99baf63767fd4415bd2a9851cd07"} Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.651682 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cdqwz" Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.653470 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qpcgx" 
event={"ID":"df9a27af-f077-408b-8559-29f9c41b7d78","Type":"ContainerStarted","Data":"cf9b39a314bbcc7169606bd516c24a5ba8d5906951cdb0495fb69744c0574ba0"} Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.658417 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8s6fh" podStartSLOduration=2.774437388 podStartE2EDuration="17.658399348s" podCreationTimestamp="2026-02-16 21:10:45 +0000 UTC" firstStartedPulling="2026-02-16 21:10:47.096885767 +0000 UTC m=+865.026181705" lastFinishedPulling="2026-02-16 21:11:01.980847717 +0000 UTC m=+879.910143665" observedRunningTime="2026-02-16 21:11:02.657780962 +0000 UTC m=+880.587076900" watchObservedRunningTime="2026-02-16 21:11:02.658399348 +0000 UTC m=+880.587695286" Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.661226 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jckks" event={"ID":"6abf6059-c304-4c75-b9df-89c83549963c","Type":"ContainerStarted","Data":"b595fed0e942ca731084f7772b2724af64f14a6b3f2b54c750b0e7ad6247c40a"} Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.661317 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jckks" Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.676363 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-4m2sl" event={"ID":"a930b399-b523-4186-8bf8-c9f071a52b0d","Type":"ContainerStarted","Data":"db3151782a307c4be7b1ca5f42e694a5427d73c0fc8dfc95173fcbb58bbfc37f"} Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.677090 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-4m2sl" Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.683792 4811 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-lh792" event={"ID":"6990871b-47ed-4368-a1f2-f582e0c01e81","Type":"ContainerStarted","Data":"019efdc3cdf8ca9286001fd23a3774fca9afc9f5e86e983a270696a78d1c0055"} Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.684428 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-lh792" Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.697173 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-q89bq" podStartSLOduration=2.228118512 podStartE2EDuration="17.697156953s" podCreationTimestamp="2026-02-16 21:10:45 +0000 UTC" firstStartedPulling="2026-02-16 21:10:46.4804646 +0000 UTC m=+864.409760538" lastFinishedPulling="2026-02-16 21:11:01.949503031 +0000 UTC m=+879.878798979" observedRunningTime="2026-02-16 21:11:02.691311704 +0000 UTC m=+880.620607632" watchObservedRunningTime="2026-02-16 21:11:02.697156953 +0000 UTC m=+880.626452891" Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.727866 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xsbk9" podStartSLOduration=2.3700817069999998 podStartE2EDuration="17.727848292s" podCreationTimestamp="2026-02-16 21:10:45 +0000 UTC" firstStartedPulling="2026-02-16 21:10:46.591339186 +0000 UTC m=+864.520635124" lastFinishedPulling="2026-02-16 21:11:01.949105751 +0000 UTC m=+879.878401709" observedRunningTime="2026-02-16 21:11:02.718602347 +0000 UTC m=+880.647898285" watchObservedRunningTime="2026-02-16 21:11:02.727848292 +0000 UTC m=+880.657144230" Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.728043 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-q2fkl" 
event={"ID":"81610b83-5cb3-41d5-81c6-a25ed9a86e25","Type":"ContainerStarted","Data":"36c284ef8e5e4e66e79c6d82aebb1a58ecb40592e6ab99663a34ce5f1b5e1934"} Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.728092 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-q2fkl" Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.740368 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-dr4l8" event={"ID":"c8e2fb2f-471b-4bf5-a57f-1a175da3c9fe","Type":"ContainerStarted","Data":"e19ed7276130adc59ddadd188a97c675425e2bb224e1fc97615b7f6f0c7449d8"} Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.740935 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-dr4l8" Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.751512 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cdqwz" podStartSLOduration=3.1971293960000002 podStartE2EDuration="17.751496133s" podCreationTimestamp="2026-02-16 21:10:45 +0000 UTC" firstStartedPulling="2026-02-16 21:10:46.72531494 +0000 UTC m=+864.654610878" lastFinishedPulling="2026-02-16 21:11:01.279681677 +0000 UTC m=+879.208977615" observedRunningTime="2026-02-16 21:11:02.751458772 +0000 UTC m=+880.680754710" watchObservedRunningTime="2026-02-16 21:11:02.751496133 +0000 UTC m=+880.680792071" Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.771798 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-rwl5h" event={"ID":"4959d7c9-42b3-479d-a5d9-f2d2a941b57f","Type":"ContainerStarted","Data":"4ff4cf33ce8e6cc7fa2d6097160fe5a8b74627f5a4354d0c0ffe3404ca330f3e"} Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.771869 4811 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-rwl5h" Feb 16 21:11:02 crc kubenswrapper[4811]: E0216 21:11:02.782176 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ql7fv" podUID="8724640a-57a7-402e-9bf8-a40105f068a0" Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.788379 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-hnr8w" podStartSLOduration=2.910992306 podStartE2EDuration="17.788362669s" podCreationTimestamp="2026-02-16 21:10:45 +0000 UTC" firstStartedPulling="2026-02-16 21:10:47.070995919 +0000 UTC m=+865.000291857" lastFinishedPulling="2026-02-16 21:11:01.948366282 +0000 UTC m=+879.877662220" observedRunningTime="2026-02-16 21:11:02.779388991 +0000 UTC m=+880.708684919" watchObservedRunningTime="2026-02-16 21:11:02.788362669 +0000 UTC m=+880.717658607" Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.856120 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-4m2sl" podStartSLOduration=2.711669015 podStartE2EDuration="17.8560847s" podCreationTimestamp="2026-02-16 21:10:45 +0000 UTC" firstStartedPulling="2026-02-16 21:10:46.134955714 +0000 UTC m=+864.064251652" lastFinishedPulling="2026-02-16 21:11:01.279371399 +0000 UTC m=+879.208667337" observedRunningTime="2026-02-16 21:11:02.848026615 +0000 UTC m=+880.777322553" watchObservedRunningTime="2026-02-16 21:11:02.8560847 +0000 UTC m=+880.785380638" Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.888007 4811 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-rwl5h" podStartSLOduration=3.546435938 podStartE2EDuration="17.887985s" podCreationTimestamp="2026-02-16 21:10:45 +0000 UTC" firstStartedPulling="2026-02-16 21:10:46.937821167 +0000 UTC m=+864.867117115" lastFinishedPulling="2026-02-16 21:11:01.279370239 +0000 UTC m=+879.208666177" observedRunningTime="2026-02-16 21:11:02.885180629 +0000 UTC m=+880.814476567" watchObservedRunningTime="2026-02-16 21:11:02.887985 +0000 UTC m=+880.817280928" Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.909290 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-dr4l8" podStartSLOduration=3.042288151 podStartE2EDuration="17.90926488s" podCreationTimestamp="2026-02-16 21:10:45 +0000 UTC" firstStartedPulling="2026-02-16 21:10:47.094486426 +0000 UTC m=+865.023782364" lastFinishedPulling="2026-02-16 21:11:01.961463155 +0000 UTC m=+879.890759093" observedRunningTime="2026-02-16 21:11:02.906563072 +0000 UTC m=+880.835859000" watchObservedRunningTime="2026-02-16 21:11:02.90926488 +0000 UTC m=+880.838560818" Feb 16 21:11:02 crc kubenswrapper[4811]: I0216 21:11:02.936446 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-lh792" podStartSLOduration=2.636627208 podStartE2EDuration="17.93642077s" podCreationTimestamp="2026-02-16 21:10:45 +0000 UTC" firstStartedPulling="2026-02-16 21:10:46.649960435 +0000 UTC m=+864.579256373" lastFinishedPulling="2026-02-16 21:11:01.949753987 +0000 UTC m=+879.879049935" observedRunningTime="2026-02-16 21:11:02.932811189 +0000 UTC m=+880.862107127" watchObservedRunningTime="2026-02-16 21:11:02.93642077 +0000 UTC m=+880.865716708" Feb 16 21:11:03 crc kubenswrapper[4811]: I0216 21:11:03.009947 4811 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-q2fkl" podStartSLOduration=3.378142713 podStartE2EDuration="18.009914187s" podCreationTimestamp="2026-02-16 21:10:45 +0000 UTC" firstStartedPulling="2026-02-16 21:10:47.332461791 +0000 UTC m=+865.261757729" lastFinishedPulling="2026-02-16 21:11:01.964233265 +0000 UTC m=+879.893529203" observedRunningTime="2026-02-16 21:11:03.008742297 +0000 UTC m=+880.938038245" watchObservedRunningTime="2026-02-16 21:11:03.009914187 +0000 UTC m=+880.939210135" Feb 16 21:11:03 crc kubenswrapper[4811]: I0216 21:11:03.051183 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jckks" podStartSLOduration=2.583335475 podStartE2EDuration="18.051159085s" podCreationTimestamp="2026-02-16 21:10:45 +0000 UTC" firstStartedPulling="2026-02-16 21:10:46.480016709 +0000 UTC m=+864.409312647" lastFinishedPulling="2026-02-16 21:11:01.947840309 +0000 UTC m=+879.877136257" observedRunningTime="2026-02-16 21:11:03.046557408 +0000 UTC m=+880.975853346" watchObservedRunningTime="2026-02-16 21:11:03.051159085 +0000 UTC m=+880.980455043" Feb 16 21:11:09 crc kubenswrapper[4811]: I0216 21:11:09.832690 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-sx755" event={"ID":"88fe4f2b-4703-4c14-bc5d-c5abfec17e62","Type":"ContainerStarted","Data":"aa087b52b69d088ada5a89a870ec9e7e29e22e9d085633eecf679f2090f33600"} Feb 16 21:11:09 crc kubenswrapper[4811]: I0216 21:11:09.834370 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-sx755" Feb 16 21:11:09 crc kubenswrapper[4811]: I0216 21:11:09.836744 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qpcgx" 
event={"ID":"df9a27af-f077-408b-8559-29f9c41b7d78","Type":"ContainerStarted","Data":"63d33f8f5d5427e61c5824d522532a973374f6e8936307e91e2306c837a3f8ee"} Feb 16 21:11:09 crc kubenswrapper[4811]: I0216 21:11:09.836875 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qpcgx" Feb 16 21:11:09 crc kubenswrapper[4811]: I0216 21:11:09.838368 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-2n6tm" event={"ID":"e2f72d19-6fc4-4fb7-8ebc-b089dc0e8231","Type":"ContainerStarted","Data":"2b2034ec7e84130619e47b42b89a4e392aada34f5591e91f25e8f7d19226ce72"} Feb 16 21:11:09 crc kubenswrapper[4811]: I0216 21:11:09.838556 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-2n6tm" Feb 16 21:11:09 crc kubenswrapper[4811]: I0216 21:11:09.850647 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-cqrfg" event={"ID":"21df9513-6f5c-45d7-b7d7-4a901037433a","Type":"ContainerStarted","Data":"74e5f9e51582782299f3ddab6bf2430a01fff4a583ce989f33d78e8700f3ee33"} Feb 16 21:11:09 crc kubenswrapper[4811]: I0216 21:11:09.851637 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-cqrfg" Feb 16 21:11:09 crc kubenswrapper[4811]: I0216 21:11:09.852935 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s84xq" event={"ID":"7f0ce1fe-b0a7-4637-927c-350d6a383cab","Type":"ContainerStarted","Data":"f05c47a0bc567a5afb352c26868251b5fb694d42bead5074975b8f374c3d9137"} Feb 16 21:11:09 crc kubenswrapper[4811]: I0216 21:11:09.872958 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/placement-operator-controller-manager-8497b45c89-sx755" podStartSLOduration=2.855506997 podStartE2EDuration="24.872928396s" podCreationTimestamp="2026-02-16 21:10:45 +0000 UTC" firstStartedPulling="2026-02-16 21:10:47.365379927 +0000 UTC m=+865.294675865" lastFinishedPulling="2026-02-16 21:11:09.382801316 +0000 UTC m=+887.312097264" observedRunningTime="2026-02-16 21:11:09.867668732 +0000 UTC m=+887.796964710" watchObservedRunningTime="2026-02-16 21:11:09.872928396 +0000 UTC m=+887.802224354" Feb 16 21:11:09 crc kubenswrapper[4811]: I0216 21:11:09.995435 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s84xq" podStartSLOduration=2.9863399 podStartE2EDuration="24.995411057s" podCreationTimestamp="2026-02-16 21:10:45 +0000 UTC" firstStartedPulling="2026-02-16 21:10:47.365727436 +0000 UTC m=+865.295023374" lastFinishedPulling="2026-02-16 21:11:09.374798593 +0000 UTC m=+887.304094531" observedRunningTime="2026-02-16 21:11:09.930236852 +0000 UTC m=+887.859532800" watchObservedRunningTime="2026-02-16 21:11:09.995411057 +0000 UTC m=+887.924707005" Feb 16 21:11:10 crc kubenswrapper[4811]: I0216 21:11:10.032065 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-cqrfg" podStartSLOduration=3.168703313 podStartE2EDuration="25.032046078s" podCreationTimestamp="2026-02-16 21:10:45 +0000 UTC" firstStartedPulling="2026-02-16 21:10:47.519088252 +0000 UTC m=+865.448384190" lastFinishedPulling="2026-02-16 21:11:09.382431027 +0000 UTC m=+887.311726955" observedRunningTime="2026-02-16 21:11:10.000110547 +0000 UTC m=+887.929406505" watchObservedRunningTime="2026-02-16 21:11:10.032046078 +0000 UTC m=+887.961342016" Feb 16 21:11:10 crc kubenswrapper[4811]: I0216 21:11:10.100401 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/test-operator-controller-manager-7866795846-2n6tm" podStartSLOduration=3.155638761 podStartE2EDuration="25.100386354s" podCreationTimestamp="2026-02-16 21:10:45 +0000 UTC" firstStartedPulling="2026-02-16 21:10:47.468335962 +0000 UTC m=+865.397631900" lastFinishedPulling="2026-02-16 21:11:09.413083555 +0000 UTC m=+887.342379493" observedRunningTime="2026-02-16 21:11:10.032771166 +0000 UTC m=+887.962067104" watchObservedRunningTime="2026-02-16 21:11:10.100386354 +0000 UTC m=+888.029682292" Feb 16 21:11:10 crc kubenswrapper[4811]: I0216 21:11:10.100902 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qpcgx" podStartSLOduration=18.139682954 podStartE2EDuration="25.100897957s" podCreationTimestamp="2026-02-16 21:10:45 +0000 UTC" firstStartedPulling="2026-02-16 21:11:02.423018399 +0000 UTC m=+880.352314337" lastFinishedPulling="2026-02-16 21:11:09.384233402 +0000 UTC m=+887.313529340" observedRunningTime="2026-02-16 21:11:10.097183732 +0000 UTC m=+888.026479670" watchObservedRunningTime="2026-02-16 21:11:10.100897957 +0000 UTC m=+888.030193895" Feb 16 21:11:12 crc kubenswrapper[4811]: I0216 21:11:12.878609 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-fbz66" event={"ID":"017b9ae7-6bf5-4781-a73e-293edb18f921","Type":"ContainerStarted","Data":"dac7d630693cda2e98e9cf5fc86085ae50581e8dc3cc76bceec305a71e2ff993"} Feb 16 21:11:15 crc kubenswrapper[4811]: I0216 21:11:15.554050 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-4m2sl" Feb 16 21:11:15 crc kubenswrapper[4811]: I0216 21:11:15.587863 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jckks" Feb 16 21:11:15 crc kubenswrapper[4811]: I0216 
21:11:15.598623 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-q89bq" Feb 16 21:11:15 crc kubenswrapper[4811]: I0216 21:11:15.704955 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xsbk9" Feb 16 21:11:15 crc kubenswrapper[4811]: I0216 21:11:15.779069 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-lh792" Feb 16 21:11:15 crc kubenswrapper[4811]: I0216 21:11:15.795255 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cdqwz" Feb 16 21:11:16 crc kubenswrapper[4811]: I0216 21:11:16.039090 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-rwl5h" Feb 16 21:11:16 crc kubenswrapper[4811]: I0216 21:11:16.091753 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8s6fh" Feb 16 21:11:16 crc kubenswrapper[4811]: I0216 21:11:16.096744 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-hnr8w" Feb 16 21:11:16 crc kubenswrapper[4811]: I0216 21:11:16.126829 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-dr4l8" Feb 16 21:11:16 crc kubenswrapper[4811]: I0216 21:11:16.185325 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-q2fkl" Feb 16 21:11:16 crc kubenswrapper[4811]: I0216 21:11:16.284835 4811 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-sx755" Feb 16 21:11:16 crc kubenswrapper[4811]: I0216 21:11:16.361346 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-cqrfg" Feb 16 21:11:16 crc kubenswrapper[4811]: I0216 21:11:16.425987 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-2n6tm" Feb 16 21:11:16 crc kubenswrapper[4811]: I0216 21:11:16.527571 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s84xq" Feb 16 21:11:16 crc kubenswrapper[4811]: I0216 21:11:16.530513 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s84xq" Feb 16 21:11:17 crc kubenswrapper[4811]: I0216 21:11:17.775061 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c41c59d7-6daa-4dac-b5f1-22c3886ff6f4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt\" (UID: \"c41c59d7-6daa-4dac-b5f1-22c3886ff6f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt" Feb 16 21:11:17 crc kubenswrapper[4811]: I0216 21:11:17.781673 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c41c59d7-6daa-4dac-b5f1-22c3886ff6f4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt\" (UID: \"c41c59d7-6daa-4dac-b5f1-22c3886ff6f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt" Feb 16 21:11:18 crc kubenswrapper[4811]: I0216 21:11:18.018105 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt" Feb 16 21:11:18 crc kubenswrapper[4811]: I0216 21:11:18.179350 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-webhook-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-v9slw\" (UID: \"e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" Feb 16 21:11:18 crc kubenswrapper[4811]: I0216 21:11:18.191831 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2-webhook-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-v9slw\" (UID: \"e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" Feb 16 21:11:18 crc kubenswrapper[4811]: I0216 21:11:18.360139 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" Feb 16 21:11:18 crc kubenswrapper[4811]: I0216 21:11:18.363621 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:11:18 crc kubenswrapper[4811]: I0216 21:11:18.363695 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:11:18 crc kubenswrapper[4811]: I0216 21:11:18.521333 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt"] Feb 16 21:11:18 crc kubenswrapper[4811]: I0216 21:11:18.888891 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw"] Feb 16 21:11:18 crc kubenswrapper[4811]: I0216 21:11:18.936436 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" event={"ID":"e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2","Type":"ContainerStarted","Data":"ceaef3150cfd5b6ee4b98b59cf2c586c137a6c5dad660ed3589db42f6c6b516b"} Feb 16 21:11:18 crc kubenswrapper[4811]: I0216 21:11:18.940944 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt" event={"ID":"c41c59d7-6daa-4dac-b5f1-22c3886ff6f4","Type":"ContainerStarted","Data":"89abd3b8ae1a62d9743ce9bd931de4af46a5ed6fcb757e34f4551fe4d94f98f6"} Feb 16 21:11:18 crc 
kubenswrapper[4811]: I0216 21:11:18.941387 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-fbz66" Feb 16 21:11:18 crc kubenswrapper[4811]: I0216 21:11:18.945885 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-fbz66" Feb 16 21:11:18 crc kubenswrapper[4811]: I0216 21:11:18.960159 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-fbz66" podStartSLOduration=9.192766388 podStartE2EDuration="33.960139999s" podCreationTimestamp="2026-02-16 21:10:45 +0000 UTC" firstStartedPulling="2026-02-16 21:10:47.350067678 +0000 UTC m=+865.279363616" lastFinishedPulling="2026-02-16 21:11:12.117441299 +0000 UTC m=+890.046737227" observedRunningTime="2026-02-16 21:11:18.958669573 +0000 UTC m=+896.887965541" watchObservedRunningTime="2026-02-16 21:11:18.960139999 +0000 UTC m=+896.889435937" Feb 16 21:11:19 crc kubenswrapper[4811]: I0216 21:11:19.950871 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" event={"ID":"e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2","Type":"ContainerStarted","Data":"7dd7ec9b4299dfa7c0e10e165962159d4a2b60a0ca6f1bc503d7d05808f95c0c"} Feb 16 21:11:19 crc kubenswrapper[4811]: I0216 21:11:19.951281 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" Feb 16 21:11:19 crc kubenswrapper[4811]: I0216 21:11:19.997036 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" podStartSLOduration=34.997008099 podStartE2EDuration="34.997008099s" podCreationTimestamp="2026-02-16 21:10:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:11:19.987458225 +0000 UTC m=+897.916754263" watchObservedRunningTime="2026-02-16 21:11:19.997008099 +0000 UTC m=+897.926304077" Feb 16 21:11:21 crc kubenswrapper[4811]: I0216 21:11:21.263182 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dtjrg"] Feb 16 21:11:21 crc kubenswrapper[4811]: I0216 21:11:21.266235 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dtjrg" Feb 16 21:11:21 crc kubenswrapper[4811]: I0216 21:11:21.280477 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dtjrg"] Feb 16 21:11:21 crc kubenswrapper[4811]: I0216 21:11:21.329870 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8445767d-7cea-4819-810c-9d73f978becf-utilities\") pod \"redhat-operators-dtjrg\" (UID: \"8445767d-7cea-4819-810c-9d73f978becf\") " pod="openshift-marketplace/redhat-operators-dtjrg" Feb 16 21:11:21 crc kubenswrapper[4811]: I0216 21:11:21.329904 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8445767d-7cea-4819-810c-9d73f978becf-catalog-content\") pod \"redhat-operators-dtjrg\" (UID: \"8445767d-7cea-4819-810c-9d73f978becf\") " pod="openshift-marketplace/redhat-operators-dtjrg" Feb 16 21:11:21 crc kubenswrapper[4811]: I0216 21:11:21.329920 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqwjl\" (UniqueName: \"kubernetes.io/projected/8445767d-7cea-4819-810c-9d73f978becf-kube-api-access-wqwjl\") pod \"redhat-operators-dtjrg\" (UID: \"8445767d-7cea-4819-810c-9d73f978becf\") " pod="openshift-marketplace/redhat-operators-dtjrg" Feb 16 21:11:21 
crc kubenswrapper[4811]: I0216 21:11:21.430841 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8445767d-7cea-4819-810c-9d73f978becf-utilities\") pod \"redhat-operators-dtjrg\" (UID: \"8445767d-7cea-4819-810c-9d73f978becf\") " pod="openshift-marketplace/redhat-operators-dtjrg" Feb 16 21:11:21 crc kubenswrapper[4811]: I0216 21:11:21.430875 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8445767d-7cea-4819-810c-9d73f978becf-catalog-content\") pod \"redhat-operators-dtjrg\" (UID: \"8445767d-7cea-4819-810c-9d73f978becf\") " pod="openshift-marketplace/redhat-operators-dtjrg" Feb 16 21:11:21 crc kubenswrapper[4811]: I0216 21:11:21.430895 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqwjl\" (UniqueName: \"kubernetes.io/projected/8445767d-7cea-4819-810c-9d73f978becf-kube-api-access-wqwjl\") pod \"redhat-operators-dtjrg\" (UID: \"8445767d-7cea-4819-810c-9d73f978becf\") " pod="openshift-marketplace/redhat-operators-dtjrg" Feb 16 21:11:21 crc kubenswrapper[4811]: I0216 21:11:21.431454 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8445767d-7cea-4819-810c-9d73f978becf-catalog-content\") pod \"redhat-operators-dtjrg\" (UID: \"8445767d-7cea-4819-810c-9d73f978becf\") " pod="openshift-marketplace/redhat-operators-dtjrg" Feb 16 21:11:21 crc kubenswrapper[4811]: I0216 21:11:21.431460 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8445767d-7cea-4819-810c-9d73f978becf-utilities\") pod \"redhat-operators-dtjrg\" (UID: \"8445767d-7cea-4819-810c-9d73f978becf\") " pod="openshift-marketplace/redhat-operators-dtjrg" Feb 16 21:11:21 crc kubenswrapper[4811]: I0216 21:11:21.443060 4811 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qpcgx" Feb 16 21:11:21 crc kubenswrapper[4811]: I0216 21:11:21.453206 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqwjl\" (UniqueName: \"kubernetes.io/projected/8445767d-7cea-4819-810c-9d73f978becf-kube-api-access-wqwjl\") pod \"redhat-operators-dtjrg\" (UID: \"8445767d-7cea-4819-810c-9d73f978becf\") " pod="openshift-marketplace/redhat-operators-dtjrg" Feb 16 21:11:21 crc kubenswrapper[4811]: I0216 21:11:21.612013 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dtjrg" Feb 16 21:11:21 crc kubenswrapper[4811]: I0216 21:11:21.974566 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ql7fv" event={"ID":"8724640a-57a7-402e-9bf8-a40105f068a0","Type":"ContainerStarted","Data":"c11c98ea30b772bccf4c6f216e2b97ecd84d1539c7fe9ad0c39bdb5682838e04"} Feb 16 21:11:21 crc kubenswrapper[4811]: I0216 21:11:21.975976 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ql7fv" Feb 16 21:11:21 crc kubenswrapper[4811]: I0216 21:11:21.978646 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-hkcjr" event={"ID":"eff6a2d8-85c4-4d00-b10f-f6b8b9266b94","Type":"ContainerStarted","Data":"28c4939e6bb13064941f16fb2d7e9396d977cea4dec92c7cdb9d359c96098a07"} Feb 16 21:11:21 crc kubenswrapper[4811]: I0216 21:11:21.979243 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-hkcjr" Feb 16 21:11:21 crc kubenswrapper[4811]: I0216 21:11:21.987212 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-46224" event={"ID":"0c7bb0d1-f8b1-4e01-8001-354628802f27","Type":"ContainerStarted","Data":"5345f867439bf6b870734b360addbf5b7e24806e3ac3265146a6a2d0eb94df17"} Feb 16 21:11:21 crc kubenswrapper[4811]: I0216 21:11:21.993214 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-5rm5q" event={"ID":"aab10670-0381-43b4-b9a6-e6c1c86fb4a7","Type":"ContainerStarted","Data":"ce3b6b13595cb1fcc4ed011d89d4d77985ac86bb018eb21d19b199dfdd85897a"} Feb 16 21:11:21 crc kubenswrapper[4811]: I0216 21:11:21.993786 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-5rm5q" Feb 16 21:11:21 crc kubenswrapper[4811]: I0216 21:11:21.997776 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ql7fv" podStartSLOduration=3.142934695 podStartE2EDuration="36.99776232s" podCreationTimestamp="2026-02-16 21:10:45 +0000 UTC" firstStartedPulling="2026-02-16 21:10:46.93284088 +0000 UTC m=+864.862136818" lastFinishedPulling="2026-02-16 21:11:20.787668495 +0000 UTC m=+898.716964443" observedRunningTime="2026-02-16 21:11:21.990999194 +0000 UTC m=+899.920295132" watchObservedRunningTime="2026-02-16 21:11:21.99776232 +0000 UTC m=+899.927058258" Feb 16 21:11:22 crc kubenswrapper[4811]: I0216 21:11:22.009389 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-hkcjr" podStartSLOduration=3.438790891 podStartE2EDuration="37.009375905s" podCreationTimestamp="2026-02-16 21:10:45 +0000 UTC" firstStartedPulling="2026-02-16 21:10:47.217700586 +0000 UTC m=+865.146996524" lastFinishedPulling="2026-02-16 21:11:20.78828557 +0000 UTC m=+898.717581538" observedRunningTime="2026-02-16 21:11:22.009249282 +0000 UTC 
m=+899.938545220" watchObservedRunningTime="2026-02-16 21:11:22.009375905 +0000 UTC m=+899.938671843" Feb 16 21:11:22 crc kubenswrapper[4811]: I0216 21:11:22.027661 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-46224" podStartSLOduration=2.594048102 podStartE2EDuration="36.027642854s" podCreationTimestamp="2026-02-16 21:10:46 +0000 UTC" firstStartedPulling="2026-02-16 21:10:47.357903307 +0000 UTC m=+865.287199235" lastFinishedPulling="2026-02-16 21:11:20.791498059 +0000 UTC m=+898.720793987" observedRunningTime="2026-02-16 21:11:22.023363689 +0000 UTC m=+899.952659627" watchObservedRunningTime="2026-02-16 21:11:22.027642854 +0000 UTC m=+899.956938792" Feb 16 21:11:22 crc kubenswrapper[4811]: I0216 21:11:22.040863 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-5rm5q" podStartSLOduration=3.738390395 podStartE2EDuration="37.040843898s" podCreationTimestamp="2026-02-16 21:10:45 +0000 UTC" firstStartedPulling="2026-02-16 21:10:47.485734034 +0000 UTC m=+865.415029972" lastFinishedPulling="2026-02-16 21:11:20.788187527 +0000 UTC m=+898.717483475" observedRunningTime="2026-02-16 21:11:22.036955282 +0000 UTC m=+899.966251210" watchObservedRunningTime="2026-02-16 21:11:22.040843898 +0000 UTC m=+899.970139846" Feb 16 21:11:22 crc kubenswrapper[4811]: I0216 21:11:22.072667 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dtjrg"] Feb 16 21:11:22 crc kubenswrapper[4811]: W0216 21:11:22.082726 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8445767d_7cea_4819_810c_9d73f978becf.slice/crio-08e3af66fe00335b3d60db2cd49084b14835735dc002ff63ad1794a35deafb16 WatchSource:0}: Error finding container 08e3af66fe00335b3d60db2cd49084b14835735dc002ff63ad1794a35deafb16: 
Status 404 returned error can't find the container with id 08e3af66fe00335b3d60db2cd49084b14835735dc002ff63ad1794a35deafb16 Feb 16 21:11:23 crc kubenswrapper[4811]: I0216 21:11:23.003694 4811 generic.go:334] "Generic (PLEG): container finished" podID="8445767d-7cea-4819-810c-9d73f978becf" containerID="d3d70663b667540dee4f0c1ccd1cac5d88700bcf3c29653e8e63eb5c8d838edb" exitCode=0 Feb 16 21:11:23 crc kubenswrapper[4811]: I0216 21:11:23.003988 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtjrg" event={"ID":"8445767d-7cea-4819-810c-9d73f978becf","Type":"ContainerDied","Data":"d3d70663b667540dee4f0c1ccd1cac5d88700bcf3c29653e8e63eb5c8d838edb"} Feb 16 21:11:23 crc kubenswrapper[4811]: I0216 21:11:23.005340 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtjrg" event={"ID":"8445767d-7cea-4819-810c-9d73f978becf","Type":"ContainerStarted","Data":"08e3af66fe00335b3d60db2cd49084b14835735dc002ff63ad1794a35deafb16"} Feb 16 21:11:24 crc kubenswrapper[4811]: I0216 21:11:24.015283 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt" event={"ID":"c41c59d7-6daa-4dac-b5f1-22c3886ff6f4","Type":"ContainerStarted","Data":"7b1a4df312414812e16f5c8b47d5358caf2629af0980be3fac63c590f6ee26f9"} Feb 16 21:11:24 crc kubenswrapper[4811]: I0216 21:11:24.056671 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt" podStartSLOduration=34.123447078 podStartE2EDuration="39.056644917s" podCreationTimestamp="2026-02-16 21:10:45 +0000 UTC" firstStartedPulling="2026-02-16 21:11:18.528047258 +0000 UTC m=+896.457343196" lastFinishedPulling="2026-02-16 21:11:23.461245097 +0000 UTC m=+901.390541035" observedRunningTime="2026-02-16 21:11:24.051395008 +0000 UTC m=+901.980691016" 
watchObservedRunningTime="2026-02-16 21:11:24.056644917 +0000 UTC m=+901.985940885" Feb 16 21:11:25 crc kubenswrapper[4811]: I0216 21:11:25.026367 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtjrg" event={"ID":"8445767d-7cea-4819-810c-9d73f978becf","Type":"ContainerStarted","Data":"bacdf95721781e0c2610010ec8d348c4302be53c3b99cacd47a001199074af72"} Feb 16 21:11:25 crc kubenswrapper[4811]: I0216 21:11:25.026551 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt" Feb 16 21:11:26 crc kubenswrapper[4811]: I0216 21:11:26.037507 4811 generic.go:334] "Generic (PLEG): container finished" podID="8445767d-7cea-4819-810c-9d73f978becf" containerID="bacdf95721781e0c2610010ec8d348c4302be53c3b99cacd47a001199074af72" exitCode=0 Feb 16 21:11:26 crc kubenswrapper[4811]: I0216 21:11:26.040266 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtjrg" event={"ID":"8445767d-7cea-4819-810c-9d73f978becf","Type":"ContainerDied","Data":"bacdf95721781e0c2610010ec8d348c4302be53c3b99cacd47a001199074af72"} Feb 16 21:11:26 crc kubenswrapper[4811]: I0216 21:11:26.058323 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ql7fv" Feb 16 21:11:26 crc kubenswrapper[4811]: I0216 21:11:26.146494 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-hkcjr" Feb 16 21:11:26 crc kubenswrapper[4811]: I0216 21:11:26.194220 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-5rm5q" Feb 16 21:11:27 crc kubenswrapper[4811]: I0216 21:11:27.049862 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-dtjrg" event={"ID":"8445767d-7cea-4819-810c-9d73f978becf","Type":"ContainerStarted","Data":"3133885bddbbeace1e56264dbd5bde55cce674cb3d7f65712683c8fa4a5cf12b"} Feb 16 21:11:27 crc kubenswrapper[4811]: I0216 21:11:27.073652 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dtjrg" podStartSLOduration=3.051693706 podStartE2EDuration="6.073625912s" podCreationTimestamp="2026-02-16 21:11:21 +0000 UTC" firstStartedPulling="2026-02-16 21:11:23.412627403 +0000 UTC m=+901.341923361" lastFinishedPulling="2026-02-16 21:11:26.434559619 +0000 UTC m=+904.363855567" observedRunningTime="2026-02-16 21:11:27.069349437 +0000 UTC m=+904.998645385" watchObservedRunningTime="2026-02-16 21:11:27.073625912 +0000 UTC m=+905.002921890" Feb 16 21:11:28 crc kubenswrapper[4811]: I0216 21:11:28.026917 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt" Feb 16 21:11:28 crc kubenswrapper[4811]: I0216 21:11:28.368159 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-v9slw" Feb 16 21:11:31 crc kubenswrapper[4811]: I0216 21:11:31.612550 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dtjrg" Feb 16 21:11:31 crc kubenswrapper[4811]: I0216 21:11:31.612600 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dtjrg" Feb 16 21:11:32 crc kubenswrapper[4811]: I0216 21:11:32.684908 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dtjrg" podUID="8445767d-7cea-4819-810c-9d73f978becf" containerName="registry-server" probeResult="failure" output=< Feb 16 21:11:32 crc kubenswrapper[4811]: timeout: failed to connect 
service ":50051" within 1s Feb 16 21:11:32 crc kubenswrapper[4811]: > Feb 16 21:11:38 crc kubenswrapper[4811]: I0216 21:11:38.021627 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f46jc"] Feb 16 21:11:38 crc kubenswrapper[4811]: I0216 21:11:38.024649 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f46jc" Feb 16 21:11:38 crc kubenswrapper[4811]: I0216 21:11:38.031128 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f46jc"] Feb 16 21:11:38 crc kubenswrapper[4811]: I0216 21:11:38.182999 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95041e72-7998-4ea4-a9d9-185f246fcc70-utilities\") pod \"redhat-marketplace-f46jc\" (UID: \"95041e72-7998-4ea4-a9d9-185f246fcc70\") " pod="openshift-marketplace/redhat-marketplace-f46jc" Feb 16 21:11:38 crc kubenswrapper[4811]: I0216 21:11:38.183100 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm4nw\" (UniqueName: \"kubernetes.io/projected/95041e72-7998-4ea4-a9d9-185f246fcc70-kube-api-access-mm4nw\") pod \"redhat-marketplace-f46jc\" (UID: \"95041e72-7998-4ea4-a9d9-185f246fcc70\") " pod="openshift-marketplace/redhat-marketplace-f46jc" Feb 16 21:11:38 crc kubenswrapper[4811]: I0216 21:11:38.183143 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95041e72-7998-4ea4-a9d9-185f246fcc70-catalog-content\") pod \"redhat-marketplace-f46jc\" (UID: \"95041e72-7998-4ea4-a9d9-185f246fcc70\") " pod="openshift-marketplace/redhat-marketplace-f46jc" Feb 16 21:11:38 crc kubenswrapper[4811]: I0216 21:11:38.284809 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mm4nw\" (UniqueName: \"kubernetes.io/projected/95041e72-7998-4ea4-a9d9-185f246fcc70-kube-api-access-mm4nw\") pod \"redhat-marketplace-f46jc\" (UID: \"95041e72-7998-4ea4-a9d9-185f246fcc70\") " pod="openshift-marketplace/redhat-marketplace-f46jc" Feb 16 21:11:38 crc kubenswrapper[4811]: I0216 21:11:38.285099 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95041e72-7998-4ea4-a9d9-185f246fcc70-catalog-content\") pod \"redhat-marketplace-f46jc\" (UID: \"95041e72-7998-4ea4-a9d9-185f246fcc70\") " pod="openshift-marketplace/redhat-marketplace-f46jc" Feb 16 21:11:38 crc kubenswrapper[4811]: I0216 21:11:38.285242 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95041e72-7998-4ea4-a9d9-185f246fcc70-utilities\") pod \"redhat-marketplace-f46jc\" (UID: \"95041e72-7998-4ea4-a9d9-185f246fcc70\") " pod="openshift-marketplace/redhat-marketplace-f46jc" Feb 16 21:11:38 crc kubenswrapper[4811]: I0216 21:11:38.285873 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95041e72-7998-4ea4-a9d9-185f246fcc70-utilities\") pod \"redhat-marketplace-f46jc\" (UID: \"95041e72-7998-4ea4-a9d9-185f246fcc70\") " pod="openshift-marketplace/redhat-marketplace-f46jc" Feb 16 21:11:38 crc kubenswrapper[4811]: I0216 21:11:38.286221 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95041e72-7998-4ea4-a9d9-185f246fcc70-catalog-content\") pod \"redhat-marketplace-f46jc\" (UID: \"95041e72-7998-4ea4-a9d9-185f246fcc70\") " pod="openshift-marketplace/redhat-marketplace-f46jc" Feb 16 21:11:38 crc kubenswrapper[4811]: I0216 21:11:38.303887 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm4nw\" (UniqueName: 
\"kubernetes.io/projected/95041e72-7998-4ea4-a9d9-185f246fcc70-kube-api-access-mm4nw\") pod \"redhat-marketplace-f46jc\" (UID: \"95041e72-7998-4ea4-a9d9-185f246fcc70\") " pod="openshift-marketplace/redhat-marketplace-f46jc" Feb 16 21:11:38 crc kubenswrapper[4811]: I0216 21:11:38.350893 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f46jc" Feb 16 21:11:38 crc kubenswrapper[4811]: I0216 21:11:38.616374 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f46jc"] Feb 16 21:11:39 crc kubenswrapper[4811]: I0216 21:11:39.147968 4811 generic.go:334] "Generic (PLEG): container finished" podID="95041e72-7998-4ea4-a9d9-185f246fcc70" containerID="1e15fc989da382e495cf87505b0a6642309c700a453f3b7a178dbc89aa75419f" exitCode=0 Feb 16 21:11:39 crc kubenswrapper[4811]: I0216 21:11:39.149327 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f46jc" event={"ID":"95041e72-7998-4ea4-a9d9-185f246fcc70","Type":"ContainerDied","Data":"1e15fc989da382e495cf87505b0a6642309c700a453f3b7a178dbc89aa75419f"} Feb 16 21:11:39 crc kubenswrapper[4811]: I0216 21:11:39.149578 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f46jc" event={"ID":"95041e72-7998-4ea4-a9d9-185f246fcc70","Type":"ContainerStarted","Data":"e8a42541a1de460c7239f8c69c4f8692478029fcbe1405c9e4ff8cb0e7c934f3"} Feb 16 21:11:40 crc kubenswrapper[4811]: I0216 21:11:40.155909 4811 generic.go:334] "Generic (PLEG): container finished" podID="95041e72-7998-4ea4-a9d9-185f246fcc70" containerID="9f12e9d3983755e113653ca90cd2f7b4ec11dfe2bf419a1ff1bfb3e9a8f6486f" exitCode=0 Feb 16 21:11:40 crc kubenswrapper[4811]: I0216 21:11:40.155969 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f46jc" 
event={"ID":"95041e72-7998-4ea4-a9d9-185f246fcc70","Type":"ContainerDied","Data":"9f12e9d3983755e113653ca90cd2f7b4ec11dfe2bf419a1ff1bfb3e9a8f6486f"} Feb 16 21:11:41 crc kubenswrapper[4811]: I0216 21:11:41.170733 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f46jc" event={"ID":"95041e72-7998-4ea4-a9d9-185f246fcc70","Type":"ContainerStarted","Data":"fa396acdd3b1ca49a596d6e58115359e56bbda6b50f543ad36ca98f28cb4d4d6"} Feb 16 21:11:41 crc kubenswrapper[4811]: I0216 21:11:41.195217 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f46jc" podStartSLOduration=2.804724736 podStartE2EDuration="4.195182466s" podCreationTimestamp="2026-02-16 21:11:37 +0000 UTC" firstStartedPulling="2026-02-16 21:11:39.150052252 +0000 UTC m=+917.079348200" lastFinishedPulling="2026-02-16 21:11:40.540509972 +0000 UTC m=+918.469805930" observedRunningTime="2026-02-16 21:11:41.193486303 +0000 UTC m=+919.122782241" watchObservedRunningTime="2026-02-16 21:11:41.195182466 +0000 UTC m=+919.124478404" Feb 16 21:11:41 crc kubenswrapper[4811]: I0216 21:11:41.685480 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dtjrg" Feb 16 21:11:41 crc kubenswrapper[4811]: I0216 21:11:41.727667 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dtjrg" Feb 16 21:11:43 crc kubenswrapper[4811]: I0216 21:11:43.389303 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dtjrg"] Feb 16 21:11:43 crc kubenswrapper[4811]: I0216 21:11:43.389814 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dtjrg" podUID="8445767d-7cea-4819-810c-9d73f978becf" containerName="registry-server" 
containerID="cri-o://3133885bddbbeace1e56264dbd5bde55cce674cb3d7f65712683c8fa4a5cf12b" gracePeriod=2 Feb 16 21:11:44 crc kubenswrapper[4811]: I0216 21:11:44.199668 4811 generic.go:334] "Generic (PLEG): container finished" podID="8445767d-7cea-4819-810c-9d73f978becf" containerID="3133885bddbbeace1e56264dbd5bde55cce674cb3d7f65712683c8fa4a5cf12b" exitCode=0 Feb 16 21:11:44 crc kubenswrapper[4811]: I0216 21:11:44.199721 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtjrg" event={"ID":"8445767d-7cea-4819-810c-9d73f978becf","Type":"ContainerDied","Data":"3133885bddbbeace1e56264dbd5bde55cce674cb3d7f65712683c8fa4a5cf12b"} Feb 16 21:11:44 crc kubenswrapper[4811]: I0216 21:11:44.390725 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dtjrg" Feb 16 21:11:44 crc kubenswrapper[4811]: I0216 21:11:44.495137 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8445767d-7cea-4819-810c-9d73f978becf-utilities\") pod \"8445767d-7cea-4819-810c-9d73f978becf\" (UID: \"8445767d-7cea-4819-810c-9d73f978becf\") " Feb 16 21:11:44 crc kubenswrapper[4811]: I0216 21:11:44.495214 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqwjl\" (UniqueName: \"kubernetes.io/projected/8445767d-7cea-4819-810c-9d73f978becf-kube-api-access-wqwjl\") pod \"8445767d-7cea-4819-810c-9d73f978becf\" (UID: \"8445767d-7cea-4819-810c-9d73f978becf\") " Feb 16 21:11:44 crc kubenswrapper[4811]: I0216 21:11:44.495327 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8445767d-7cea-4819-810c-9d73f978becf-catalog-content\") pod \"8445767d-7cea-4819-810c-9d73f978becf\" (UID: \"8445767d-7cea-4819-810c-9d73f978becf\") " Feb 16 21:11:44 crc kubenswrapper[4811]: I0216 
21:11:44.496135 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8445767d-7cea-4819-810c-9d73f978becf-utilities" (OuterVolumeSpecName: "utilities") pod "8445767d-7cea-4819-810c-9d73f978becf" (UID: "8445767d-7cea-4819-810c-9d73f978becf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:11:44 crc kubenswrapper[4811]: I0216 21:11:44.502001 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8445767d-7cea-4819-810c-9d73f978becf-kube-api-access-wqwjl" (OuterVolumeSpecName: "kube-api-access-wqwjl") pod "8445767d-7cea-4819-810c-9d73f978becf" (UID: "8445767d-7cea-4819-810c-9d73f978becf"). InnerVolumeSpecName "kube-api-access-wqwjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:11:44 crc kubenswrapper[4811]: I0216 21:11:44.597566 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8445767d-7cea-4819-810c-9d73f978becf-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:11:44 crc kubenswrapper[4811]: I0216 21:11:44.597624 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqwjl\" (UniqueName: \"kubernetes.io/projected/8445767d-7cea-4819-810c-9d73f978becf-kube-api-access-wqwjl\") on node \"crc\" DevicePath \"\"" Feb 16 21:11:44 crc kubenswrapper[4811]: I0216 21:11:44.628508 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8445767d-7cea-4819-810c-9d73f978becf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8445767d-7cea-4819-810c-9d73f978becf" (UID: "8445767d-7cea-4819-810c-9d73f978becf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:11:44 crc kubenswrapper[4811]: I0216 21:11:44.698875 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8445767d-7cea-4819-810c-9d73f978becf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:11:45 crc kubenswrapper[4811]: I0216 21:11:45.213015 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtjrg" event={"ID":"8445767d-7cea-4819-810c-9d73f978becf","Type":"ContainerDied","Data":"08e3af66fe00335b3d60db2cd49084b14835735dc002ff63ad1794a35deafb16"} Feb 16 21:11:45 crc kubenswrapper[4811]: I0216 21:11:45.213109 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dtjrg" Feb 16 21:11:45 crc kubenswrapper[4811]: I0216 21:11:45.213592 4811 scope.go:117] "RemoveContainer" containerID="3133885bddbbeace1e56264dbd5bde55cce674cb3d7f65712683c8fa4a5cf12b" Feb 16 21:11:45 crc kubenswrapper[4811]: I0216 21:11:45.244854 4811 scope.go:117] "RemoveContainer" containerID="bacdf95721781e0c2610010ec8d348c4302be53c3b99cacd47a001199074af72" Feb 16 21:11:45 crc kubenswrapper[4811]: I0216 21:11:45.249116 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dtjrg"] Feb 16 21:11:45 crc kubenswrapper[4811]: I0216 21:11:45.271314 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dtjrg"] Feb 16 21:11:45 crc kubenswrapper[4811]: I0216 21:11:45.292382 4811 scope.go:117] "RemoveContainer" containerID="d3d70663b667540dee4f0c1ccd1cac5d88700bcf3c29653e8e63eb5c8d838edb" Feb 16 21:11:46 crc kubenswrapper[4811]: I0216 21:11:46.712502 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8445767d-7cea-4819-810c-9d73f978becf" path="/var/lib/kubelet/pods/8445767d-7cea-4819-810c-9d73f978becf/volumes" Feb 16 21:11:47 crc 
kubenswrapper[4811]: I0216 21:11:47.404158 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gpdwb"] Feb 16 21:11:47 crc kubenswrapper[4811]: E0216 21:11:47.404542 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8445767d-7cea-4819-810c-9d73f978becf" containerName="registry-server" Feb 16 21:11:47 crc kubenswrapper[4811]: I0216 21:11:47.404565 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8445767d-7cea-4819-810c-9d73f978becf" containerName="registry-server" Feb 16 21:11:47 crc kubenswrapper[4811]: E0216 21:11:47.404591 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8445767d-7cea-4819-810c-9d73f978becf" containerName="extract-content" Feb 16 21:11:47 crc kubenswrapper[4811]: I0216 21:11:47.404600 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8445767d-7cea-4819-810c-9d73f978becf" containerName="extract-content" Feb 16 21:11:47 crc kubenswrapper[4811]: E0216 21:11:47.404611 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8445767d-7cea-4819-810c-9d73f978becf" containerName="extract-utilities" Feb 16 21:11:47 crc kubenswrapper[4811]: I0216 21:11:47.404620 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8445767d-7cea-4819-810c-9d73f978becf" containerName="extract-utilities" Feb 16 21:11:47 crc kubenswrapper[4811]: I0216 21:11:47.404806 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="8445767d-7cea-4819-810c-9d73f978becf" containerName="registry-server" Feb 16 21:11:47 crc kubenswrapper[4811]: I0216 21:11:47.406082 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gpdwb" Feb 16 21:11:47 crc kubenswrapper[4811]: I0216 21:11:47.420655 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gpdwb"] Feb 16 21:11:47 crc kubenswrapper[4811]: I0216 21:11:47.539428 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4p6s\" (UniqueName: \"kubernetes.io/projected/e6100253-d183-48b3-bcdf-f193f07d42a1-kube-api-access-t4p6s\") pod \"community-operators-gpdwb\" (UID: \"e6100253-d183-48b3-bcdf-f193f07d42a1\") " pod="openshift-marketplace/community-operators-gpdwb" Feb 16 21:11:47 crc kubenswrapper[4811]: I0216 21:11:47.539509 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6100253-d183-48b3-bcdf-f193f07d42a1-catalog-content\") pod \"community-operators-gpdwb\" (UID: \"e6100253-d183-48b3-bcdf-f193f07d42a1\") " pod="openshift-marketplace/community-operators-gpdwb" Feb 16 21:11:47 crc kubenswrapper[4811]: I0216 21:11:47.539528 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6100253-d183-48b3-bcdf-f193f07d42a1-utilities\") pod \"community-operators-gpdwb\" (UID: \"e6100253-d183-48b3-bcdf-f193f07d42a1\") " pod="openshift-marketplace/community-operators-gpdwb" Feb 16 21:11:47 crc kubenswrapper[4811]: I0216 21:11:47.640872 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4p6s\" (UniqueName: \"kubernetes.io/projected/e6100253-d183-48b3-bcdf-f193f07d42a1-kube-api-access-t4p6s\") pod \"community-operators-gpdwb\" (UID: \"e6100253-d183-48b3-bcdf-f193f07d42a1\") " pod="openshift-marketplace/community-operators-gpdwb" Feb 16 21:11:47 crc kubenswrapper[4811]: I0216 21:11:47.640960 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6100253-d183-48b3-bcdf-f193f07d42a1-utilities\") pod \"community-operators-gpdwb\" (UID: \"e6100253-d183-48b3-bcdf-f193f07d42a1\") " pod="openshift-marketplace/community-operators-gpdwb" Feb 16 21:11:47 crc kubenswrapper[4811]: I0216 21:11:47.640996 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6100253-d183-48b3-bcdf-f193f07d42a1-catalog-content\") pod \"community-operators-gpdwb\" (UID: \"e6100253-d183-48b3-bcdf-f193f07d42a1\") " pod="openshift-marketplace/community-operators-gpdwb" Feb 16 21:11:47 crc kubenswrapper[4811]: I0216 21:11:47.641537 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6100253-d183-48b3-bcdf-f193f07d42a1-utilities\") pod \"community-operators-gpdwb\" (UID: \"e6100253-d183-48b3-bcdf-f193f07d42a1\") " pod="openshift-marketplace/community-operators-gpdwb" Feb 16 21:11:47 crc kubenswrapper[4811]: I0216 21:11:47.641591 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6100253-d183-48b3-bcdf-f193f07d42a1-catalog-content\") pod \"community-operators-gpdwb\" (UID: \"e6100253-d183-48b3-bcdf-f193f07d42a1\") " pod="openshift-marketplace/community-operators-gpdwb" Feb 16 21:11:47 crc kubenswrapper[4811]: I0216 21:11:47.666257 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4p6s\" (UniqueName: \"kubernetes.io/projected/e6100253-d183-48b3-bcdf-f193f07d42a1-kube-api-access-t4p6s\") pod \"community-operators-gpdwb\" (UID: \"e6100253-d183-48b3-bcdf-f193f07d42a1\") " pod="openshift-marketplace/community-operators-gpdwb" Feb 16 21:11:47 crc kubenswrapper[4811]: I0216 21:11:47.722717 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gpdwb" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.123883 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gpdwb"] Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.171693 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8rqf7"] Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.173459 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8rqf7" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.179553 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.179706 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-cck2t" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.179836 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.180394 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.185545 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8rqf7"] Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.255478 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpdwb" event={"ID":"e6100253-d183-48b3-bcdf-f193f07d42a1","Type":"ContainerStarted","Data":"d337e5ecee55ab30330dd42df914f83ecf61b5300b50c924b7cc6ff6508fda0d"} Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.265780 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9vqg2"] Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.268669 4811 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9vqg2" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.271005 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.326177 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9vqg2"] Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.351897 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff07a920-37d9-4e47-b0ed-a7319602bc75-config\") pod \"dnsmasq-dns-78dd6ddcc-9vqg2\" (UID: \"ff07a920-37d9-4e47-b0ed-a7319602bc75\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9vqg2" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.351963 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/586092f5-7d82-4113-b7be-2753a057b7f6-config\") pod \"dnsmasq-dns-675f4bcbfc-8rqf7\" (UID: \"586092f5-7d82-4113-b7be-2753a057b7f6\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8rqf7" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.351984 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff07a920-37d9-4e47-b0ed-a7319602bc75-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9vqg2\" (UID: \"ff07a920-37d9-4e47-b0ed-a7319602bc75\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9vqg2" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.352361 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f46jc" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.352415 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f46jc" Feb 16 21:11:48 crc kubenswrapper[4811]: 
I0216 21:11:48.364776 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7nvd\" (UniqueName: \"kubernetes.io/projected/ff07a920-37d9-4e47-b0ed-a7319602bc75-kube-api-access-b7nvd\") pod \"dnsmasq-dns-78dd6ddcc-9vqg2\" (UID: \"ff07a920-37d9-4e47-b0ed-a7319602bc75\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9vqg2" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.364846 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv6gv\" (UniqueName: \"kubernetes.io/projected/586092f5-7d82-4113-b7be-2753a057b7f6-kube-api-access-dv6gv\") pod \"dnsmasq-dns-675f4bcbfc-8rqf7\" (UID: \"586092f5-7d82-4113-b7be-2753a057b7f6\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8rqf7" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.372519 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.372589 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.419504 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f46jc" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.466433 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7nvd\" (UniqueName: 
\"kubernetes.io/projected/ff07a920-37d9-4e47-b0ed-a7319602bc75-kube-api-access-b7nvd\") pod \"dnsmasq-dns-78dd6ddcc-9vqg2\" (UID: \"ff07a920-37d9-4e47-b0ed-a7319602bc75\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9vqg2" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.466487 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv6gv\" (UniqueName: \"kubernetes.io/projected/586092f5-7d82-4113-b7be-2753a057b7f6-kube-api-access-dv6gv\") pod \"dnsmasq-dns-675f4bcbfc-8rqf7\" (UID: \"586092f5-7d82-4113-b7be-2753a057b7f6\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8rqf7" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.466519 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff07a920-37d9-4e47-b0ed-a7319602bc75-config\") pod \"dnsmasq-dns-78dd6ddcc-9vqg2\" (UID: \"ff07a920-37d9-4e47-b0ed-a7319602bc75\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9vqg2" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.466557 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/586092f5-7d82-4113-b7be-2753a057b7f6-config\") pod \"dnsmasq-dns-675f4bcbfc-8rqf7\" (UID: \"586092f5-7d82-4113-b7be-2753a057b7f6\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8rqf7" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.466572 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff07a920-37d9-4e47-b0ed-a7319602bc75-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9vqg2\" (UID: \"ff07a920-37d9-4e47-b0ed-a7319602bc75\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9vqg2" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.467443 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff07a920-37d9-4e47-b0ed-a7319602bc75-dns-svc\") pod 
\"dnsmasq-dns-78dd6ddcc-9vqg2\" (UID: \"ff07a920-37d9-4e47-b0ed-a7319602bc75\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9vqg2" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.467561 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff07a920-37d9-4e47-b0ed-a7319602bc75-config\") pod \"dnsmasq-dns-78dd6ddcc-9vqg2\" (UID: \"ff07a920-37d9-4e47-b0ed-a7319602bc75\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9vqg2" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.467594 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/586092f5-7d82-4113-b7be-2753a057b7f6-config\") pod \"dnsmasq-dns-675f4bcbfc-8rqf7\" (UID: \"586092f5-7d82-4113-b7be-2753a057b7f6\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8rqf7" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.485966 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv6gv\" (UniqueName: \"kubernetes.io/projected/586092f5-7d82-4113-b7be-2753a057b7f6-kube-api-access-dv6gv\") pod \"dnsmasq-dns-675f4bcbfc-8rqf7\" (UID: \"586092f5-7d82-4113-b7be-2753a057b7f6\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8rqf7" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.486044 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7nvd\" (UniqueName: \"kubernetes.io/projected/ff07a920-37d9-4e47-b0ed-a7319602bc75-kube-api-access-b7nvd\") pod \"dnsmasq-dns-78dd6ddcc-9vqg2\" (UID: \"ff07a920-37d9-4e47-b0ed-a7319602bc75\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9vqg2" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.512682 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8rqf7" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.589936 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9vqg2" Feb 16 21:11:48 crc kubenswrapper[4811]: I0216 21:11:48.736776 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8rqf7"] Feb 16 21:11:49 crc kubenswrapper[4811]: I0216 21:11:49.134168 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9vqg2"] Feb 16 21:11:49 crc kubenswrapper[4811]: W0216 21:11:49.140101 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff07a920_37d9_4e47_b0ed_a7319602bc75.slice/crio-a1a453cb6642304602c44e667d5cb921199afc9f25ff2990652b1fd3de7c273c WatchSource:0}: Error finding container a1a453cb6642304602c44e667d5cb921199afc9f25ff2990652b1fd3de7c273c: Status 404 returned error can't find the container with id a1a453cb6642304602c44e667d5cb921199afc9f25ff2990652b1fd3de7c273c Feb 16 21:11:49 crc kubenswrapper[4811]: I0216 21:11:49.265825 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-9vqg2" event={"ID":"ff07a920-37d9-4e47-b0ed-a7319602bc75","Type":"ContainerStarted","Data":"a1a453cb6642304602c44e667d5cb921199afc9f25ff2990652b1fd3de7c273c"} Feb 16 21:11:49 crc kubenswrapper[4811]: I0216 21:11:49.267629 4811 generic.go:334] "Generic (PLEG): container finished" podID="e6100253-d183-48b3-bcdf-f193f07d42a1" containerID="2617a2dd076da2c786dc73abb7c04aa4c0680eecb952c0151b4f94f24e482620" exitCode=0 Feb 16 21:11:49 crc kubenswrapper[4811]: I0216 21:11:49.267689 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpdwb" event={"ID":"e6100253-d183-48b3-bcdf-f193f07d42a1","Type":"ContainerDied","Data":"2617a2dd076da2c786dc73abb7c04aa4c0680eecb952c0151b4f94f24e482620"} Feb 16 21:11:49 crc kubenswrapper[4811]: I0216 21:11:49.269146 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8rqf7" 
event={"ID":"586092f5-7d82-4113-b7be-2753a057b7f6","Type":"ContainerStarted","Data":"8ac089884a1d6f985d3652058de22a736d4cf0b2be180779954cdd4377390572"} Feb 16 21:11:49 crc kubenswrapper[4811]: I0216 21:11:49.322117 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f46jc" Feb 16 21:11:50 crc kubenswrapper[4811]: I0216 21:11:50.794932 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f46jc"] Feb 16 21:11:50 crc kubenswrapper[4811]: I0216 21:11:50.800122 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8rqf7"] Feb 16 21:11:50 crc kubenswrapper[4811]: I0216 21:11:50.821452 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gncf7"] Feb 16 21:11:50 crc kubenswrapper[4811]: I0216 21:11:50.824108 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-gncf7" Feb 16 21:11:50 crc kubenswrapper[4811]: I0216 21:11:50.828790 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gncf7"] Feb 16 21:11:50 crc kubenswrapper[4811]: I0216 21:11:50.908008 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/400bc5f6-6b87-4af8-9fa9-4429afb77168-dns-svc\") pod \"dnsmasq-dns-666b6646f7-gncf7\" (UID: \"400bc5f6-6b87-4af8-9fa9-4429afb77168\") " pod="openstack/dnsmasq-dns-666b6646f7-gncf7" Feb 16 21:11:50 crc kubenswrapper[4811]: I0216 21:11:50.908093 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl8ls\" (UniqueName: \"kubernetes.io/projected/400bc5f6-6b87-4af8-9fa9-4429afb77168-kube-api-access-jl8ls\") pod \"dnsmasq-dns-666b6646f7-gncf7\" (UID: \"400bc5f6-6b87-4af8-9fa9-4429afb77168\") " 
pod="openstack/dnsmasq-dns-666b6646f7-gncf7" Feb 16 21:11:50 crc kubenswrapper[4811]: I0216 21:11:50.908166 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/400bc5f6-6b87-4af8-9fa9-4429afb77168-config\") pod \"dnsmasq-dns-666b6646f7-gncf7\" (UID: \"400bc5f6-6b87-4af8-9fa9-4429afb77168\") " pod="openstack/dnsmasq-dns-666b6646f7-gncf7" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.014279 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/400bc5f6-6b87-4af8-9fa9-4429afb77168-dns-svc\") pod \"dnsmasq-dns-666b6646f7-gncf7\" (UID: \"400bc5f6-6b87-4af8-9fa9-4429afb77168\") " pod="openstack/dnsmasq-dns-666b6646f7-gncf7" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.014606 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl8ls\" (UniqueName: \"kubernetes.io/projected/400bc5f6-6b87-4af8-9fa9-4429afb77168-kube-api-access-jl8ls\") pod \"dnsmasq-dns-666b6646f7-gncf7\" (UID: \"400bc5f6-6b87-4af8-9fa9-4429afb77168\") " pod="openstack/dnsmasq-dns-666b6646f7-gncf7" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.014640 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/400bc5f6-6b87-4af8-9fa9-4429afb77168-config\") pod \"dnsmasq-dns-666b6646f7-gncf7\" (UID: \"400bc5f6-6b87-4af8-9fa9-4429afb77168\") " pod="openstack/dnsmasq-dns-666b6646f7-gncf7" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.015489 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/400bc5f6-6b87-4af8-9fa9-4429afb77168-config\") pod \"dnsmasq-dns-666b6646f7-gncf7\" (UID: \"400bc5f6-6b87-4af8-9fa9-4429afb77168\") " pod="openstack/dnsmasq-dns-666b6646f7-gncf7" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 
21:11:51.015926 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/400bc5f6-6b87-4af8-9fa9-4429afb77168-dns-svc\") pod \"dnsmasq-dns-666b6646f7-gncf7\" (UID: \"400bc5f6-6b87-4af8-9fa9-4429afb77168\") " pod="openstack/dnsmasq-dns-666b6646f7-gncf7" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.042110 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl8ls\" (UniqueName: \"kubernetes.io/projected/400bc5f6-6b87-4af8-9fa9-4429afb77168-kube-api-access-jl8ls\") pod \"dnsmasq-dns-666b6646f7-gncf7\" (UID: \"400bc5f6-6b87-4af8-9fa9-4429afb77168\") " pod="openstack/dnsmasq-dns-666b6646f7-gncf7" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.069526 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9vqg2"] Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.091067 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-94vfs"] Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.092304 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-94vfs" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.107100 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-94vfs"] Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.149779 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-gncf7" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.218385 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/285b4d00-7d22-44c0-8a35-6f076f3135a7-config\") pod \"dnsmasq-dns-57d769cc4f-94vfs\" (UID: \"285b4d00-7d22-44c0-8a35-6f076f3135a7\") " pod="openstack/dnsmasq-dns-57d769cc4f-94vfs" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.218472 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff2sm\" (UniqueName: \"kubernetes.io/projected/285b4d00-7d22-44c0-8a35-6f076f3135a7-kube-api-access-ff2sm\") pod \"dnsmasq-dns-57d769cc4f-94vfs\" (UID: \"285b4d00-7d22-44c0-8a35-6f076f3135a7\") " pod="openstack/dnsmasq-dns-57d769cc4f-94vfs" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.218541 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/285b4d00-7d22-44c0-8a35-6f076f3135a7-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-94vfs\" (UID: \"285b4d00-7d22-44c0-8a35-6f076f3135a7\") " pod="openstack/dnsmasq-dns-57d769cc4f-94vfs" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.302434 4811 generic.go:334] "Generic (PLEG): container finished" podID="e6100253-d183-48b3-bcdf-f193f07d42a1" containerID="fa7618e858d0b78093ba5d2170f36c52af83dc4d1c4f5b96af8ecbb0a229e136" exitCode=0 Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.302627 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f46jc" podUID="95041e72-7998-4ea4-a9d9-185f246fcc70" containerName="registry-server" containerID="cri-o://fa396acdd3b1ca49a596d6e58115359e56bbda6b50f543ad36ca98f28cb4d4d6" gracePeriod=2 Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.304607 4811 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-gpdwb" event={"ID":"e6100253-d183-48b3-bcdf-f193f07d42a1","Type":"ContainerDied","Data":"fa7618e858d0b78093ba5d2170f36c52af83dc4d1c4f5b96af8ecbb0a229e136"} Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.320272 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/285b4d00-7d22-44c0-8a35-6f076f3135a7-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-94vfs\" (UID: \"285b4d00-7d22-44c0-8a35-6f076f3135a7\") " pod="openstack/dnsmasq-dns-57d769cc4f-94vfs" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.320630 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/285b4d00-7d22-44c0-8a35-6f076f3135a7-config\") pod \"dnsmasq-dns-57d769cc4f-94vfs\" (UID: \"285b4d00-7d22-44c0-8a35-6f076f3135a7\") " pod="openstack/dnsmasq-dns-57d769cc4f-94vfs" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.320674 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff2sm\" (UniqueName: \"kubernetes.io/projected/285b4d00-7d22-44c0-8a35-6f076f3135a7-kube-api-access-ff2sm\") pod \"dnsmasq-dns-57d769cc4f-94vfs\" (UID: \"285b4d00-7d22-44c0-8a35-6f076f3135a7\") " pod="openstack/dnsmasq-dns-57d769cc4f-94vfs" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.321933 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/285b4d00-7d22-44c0-8a35-6f076f3135a7-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-94vfs\" (UID: \"285b4d00-7d22-44c0-8a35-6f076f3135a7\") " pod="openstack/dnsmasq-dns-57d769cc4f-94vfs" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.322826 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/285b4d00-7d22-44c0-8a35-6f076f3135a7-config\") pod 
\"dnsmasq-dns-57d769cc4f-94vfs\" (UID: \"285b4d00-7d22-44c0-8a35-6f076f3135a7\") " pod="openstack/dnsmasq-dns-57d769cc4f-94vfs" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.348831 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff2sm\" (UniqueName: \"kubernetes.io/projected/285b4d00-7d22-44c0-8a35-6f076f3135a7-kube-api-access-ff2sm\") pod \"dnsmasq-dns-57d769cc4f-94vfs\" (UID: \"285b4d00-7d22-44c0-8a35-6f076f3135a7\") " pod="openstack/dnsmasq-dns-57d769cc4f-94vfs" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.419077 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-94vfs" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.694424 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gncf7"] Feb 16 21:11:51 crc kubenswrapper[4811]: W0216 21:11:51.746470 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod400bc5f6_6b87_4af8_9fa9_4429afb77168.slice/crio-63d86e7c7025eeeee4043f03ef5e96097a80512fad50e854ce8e736f7f1dab16 WatchSource:0}: Error finding container 63d86e7c7025eeeee4043f03ef5e96097a80512fad50e854ce8e736f7f1dab16: Status 404 returned error can't find the container with id 63d86e7c7025eeeee4043f03ef5e96097a80512fad50e854ce8e736f7f1dab16 Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.749890 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f46jc" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.946750 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95041e72-7998-4ea4-a9d9-185f246fcc70-utilities\") pod \"95041e72-7998-4ea4-a9d9-185f246fcc70\" (UID: \"95041e72-7998-4ea4-a9d9-185f246fcc70\") " Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.946811 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95041e72-7998-4ea4-a9d9-185f246fcc70-catalog-content\") pod \"95041e72-7998-4ea4-a9d9-185f246fcc70\" (UID: \"95041e72-7998-4ea4-a9d9-185f246fcc70\") " Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.946894 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mm4nw\" (UniqueName: \"kubernetes.io/projected/95041e72-7998-4ea4-a9d9-185f246fcc70-kube-api-access-mm4nw\") pod \"95041e72-7998-4ea4-a9d9-185f246fcc70\" (UID: \"95041e72-7998-4ea4-a9d9-185f246fcc70\") " Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.947831 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95041e72-7998-4ea4-a9d9-185f246fcc70-utilities" (OuterVolumeSpecName: "utilities") pod "95041e72-7998-4ea4-a9d9-185f246fcc70" (UID: "95041e72-7998-4ea4-a9d9-185f246fcc70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.956915 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95041e72-7998-4ea4-a9d9-185f246fcc70-kube-api-access-mm4nw" (OuterVolumeSpecName: "kube-api-access-mm4nw") pod "95041e72-7998-4ea4-a9d9-185f246fcc70" (UID: "95041e72-7998-4ea4-a9d9-185f246fcc70"). InnerVolumeSpecName "kube-api-access-mm4nw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.956974 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-94vfs"] Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.967020 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 21:11:51 crc kubenswrapper[4811]: E0216 21:11:51.967360 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95041e72-7998-4ea4-a9d9-185f246fcc70" containerName="extract-content" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.967372 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="95041e72-7998-4ea4-a9d9-185f246fcc70" containerName="extract-content" Feb 16 21:11:51 crc kubenswrapper[4811]: E0216 21:11:51.967391 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95041e72-7998-4ea4-a9d9-185f246fcc70" containerName="registry-server" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.967396 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="95041e72-7998-4ea4-a9d9-185f246fcc70" containerName="registry-server" Feb 16 21:11:51 crc kubenswrapper[4811]: E0216 21:11:51.967416 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95041e72-7998-4ea4-a9d9-185f246fcc70" containerName="extract-utilities" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.967422 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="95041e72-7998-4ea4-a9d9-185f246fcc70" containerName="extract-utilities" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.967613 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="95041e72-7998-4ea4-a9d9-185f246fcc70" containerName="registry-server" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.968515 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.970240 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-fhdrk" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.972923 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.973108 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.973210 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.972939 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.973412 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.973494 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.977552 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 21:11:51 crc kubenswrapper[4811]: I0216 21:11:51.987262 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95041e72-7998-4ea4-a9d9-185f246fcc70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95041e72-7998-4ea4-a9d9-185f246fcc70" (UID: "95041e72-7998-4ea4-a9d9-185f246fcc70"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.049951 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mm4nw\" (UniqueName: \"kubernetes.io/projected/95041e72-7998-4ea4-a9d9-185f246fcc70-kube-api-access-mm4nw\") on node \"crc\" DevicePath \"\"" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.049990 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95041e72-7998-4ea4-a9d9-185f246fcc70-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.050000 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95041e72-7998-4ea4-a9d9-185f246fcc70-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.152293 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cd541633-15e7-4a12-99a4-72637521386d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.152347 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cd541633-15e7-4a12-99a4-72637521386d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.152373 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfqln\" (UniqueName: \"kubernetes.io/projected/cd541633-15e7-4a12-99a4-72637521386d-kube-api-access-bfqln\") pod \"rabbitmq-server-0\" (UID: 
\"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.152435 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cd541633-15e7-4a12-99a4-72637521386d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.152551 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd541633-15e7-4a12-99a4-72637521386d-config-data\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.152574 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cd541633-15e7-4a12-99a4-72637521386d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.152624 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cd541633-15e7-4a12-99a4-72637521386d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.152664 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-250c48e0-f760-488e-b490-898832d4a33f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-250c48e0-f760-488e-b490-898832d4a33f\") pod \"rabbitmq-server-0\" (UID: 
\"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.152706 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cd541633-15e7-4a12-99a4-72637521386d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.152725 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cd541633-15e7-4a12-99a4-72637521386d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.152781 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cd541633-15e7-4a12-99a4-72637521386d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.238643 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.239882 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.242247 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.244440 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.244490 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.244564 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.244655 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.244666 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-47r6h" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.244848 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.254225 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-250c48e0-f760-488e-b490-898832d4a33f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-250c48e0-f760-488e-b490-898832d4a33f\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.254271 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cd541633-15e7-4a12-99a4-72637521386d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.254303 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cd541633-15e7-4a12-99a4-72637521386d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.254331 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cd541633-15e7-4a12-99a4-72637521386d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.254362 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cd541633-15e7-4a12-99a4-72637521386d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.254388 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cd541633-15e7-4a12-99a4-72637521386d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.254413 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfqln\" (UniqueName: \"kubernetes.io/projected/cd541633-15e7-4a12-99a4-72637521386d-kube-api-access-bfqln\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.254444 
4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cd541633-15e7-4a12-99a4-72637521386d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.254491 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd541633-15e7-4a12-99a4-72637521386d-config-data\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.254521 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cd541633-15e7-4a12-99a4-72637521386d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.254546 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cd541633-15e7-4a12-99a4-72637521386d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.255950 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cd541633-15e7-4a12-99a4-72637521386d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.256696 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/cd541633-15e7-4a12-99a4-72637521386d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.257142 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd541633-15e7-4a12-99a4-72637521386d-config-data\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.257270 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cd541633-15e7-4a12-99a4-72637521386d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.257322 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cd541633-15e7-4a12-99a4-72637521386d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.258512 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cd541633-15e7-4a12-99a4-72637521386d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.258977 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cd541633-15e7-4a12-99a4-72637521386d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc 
kubenswrapper[4811]: I0216 21:11:52.260611 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.261568 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cd541633-15e7-4a12-99a4-72637521386d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.261970 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cd541633-15e7-4a12-99a4-72637521386d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.274141 4811 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.274181 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-250c48e0-f760-488e-b490-898832d4a33f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-250c48e0-f760-488e-b490-898832d4a33f\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/af4e577dc605cd416eca06f6478e9ce7c89130d13dde3be9e71f468afb200071/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.275041 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfqln\" (UniqueName: \"kubernetes.io/projected/cd541633-15e7-4a12-99a4-72637521386d-kube-api-access-bfqln\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.324235 4811 generic.go:334] "Generic (PLEG): container finished" podID="95041e72-7998-4ea4-a9d9-185f246fcc70" containerID="fa396acdd3b1ca49a596d6e58115359e56bbda6b50f543ad36ca98f28cb4d4d6" exitCode=0 Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.324328 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f46jc" event={"ID":"95041e72-7998-4ea4-a9d9-185f246fcc70","Type":"ContainerDied","Data":"fa396acdd3b1ca49a596d6e58115359e56bbda6b50f543ad36ca98f28cb4d4d6"} Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.324392 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f46jc" event={"ID":"95041e72-7998-4ea4-a9d9-185f246fcc70","Type":"ContainerDied","Data":"e8a42541a1de460c7239f8c69c4f8692478029fcbe1405c9e4ff8cb0e7c934f3"} Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.324411 4811 scope.go:117] "RemoveContainer" 
containerID="fa396acdd3b1ca49a596d6e58115359e56bbda6b50f543ad36ca98f28cb4d4d6" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.324342 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f46jc" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.328506 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-250c48e0-f760-488e-b490-898832d4a33f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-250c48e0-f760-488e-b490-898832d4a33f\") pod \"rabbitmq-server-0\" (UID: \"cd541633-15e7-4a12-99a4-72637521386d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.349589 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpdwb" event={"ID":"e6100253-d183-48b3-bcdf-f193f07d42a1","Type":"ContainerStarted","Data":"e8f122215a6c2e0fdd6370abe92235ed41ca46b90b12181d96f2033073b1cf9c"} Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.351309 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-94vfs" event={"ID":"285b4d00-7d22-44c0-8a35-6f076f3135a7","Type":"ContainerStarted","Data":"f3fcf4fdcdc8332229b71aba0d1c5258531ab1ce82794a78949d91287477bfa3"} Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.353156 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-gncf7" event={"ID":"400bc5f6-6b87-4af8-9fa9-4429afb77168","Type":"ContainerStarted","Data":"63d86e7c7025eeeee4043f03ef5e96097a80512fad50e854ce8e736f7f1dab16"} Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.361718 4811 scope.go:117] "RemoveContainer" containerID="9f12e9d3983755e113653ca90cd2f7b4ec11dfe2bf419a1ff1bfb3e9a8f6486f" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.368929 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/40263486-d6cd-4aa0-9570-affea970096f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.368984 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/40263486-d6cd-4aa0-9570-affea970096f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.369028 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/40263486-d6cd-4aa0-9570-affea970096f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.369053 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/40263486-d6cd-4aa0-9570-affea970096f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.369089 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/40263486-d6cd-4aa0-9570-affea970096f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.369110 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2ckqb\" (UniqueName: \"kubernetes.io/projected/40263486-d6cd-4aa0-9570-affea970096f-kube-api-access-2ckqb\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.369139 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f1a24d15-0f33-4da4-af7e-80b619d294c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1a24d15-0f33-4da4-af7e-80b619d294c4\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.369156 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/40263486-d6cd-4aa0-9570-affea970096f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.369173 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40263486-d6cd-4aa0-9570-affea970096f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.369189 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/40263486-d6cd-4aa0-9570-affea970096f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.369223 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/40263486-d6cd-4aa0-9570-affea970096f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.372022 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gpdwb" podStartSLOduration=2.949624888 podStartE2EDuration="5.37199888s" podCreationTimestamp="2026-02-16 21:11:47 +0000 UTC" firstStartedPulling="2026-02-16 21:11:49.26936816 +0000 UTC m=+927.198664098" lastFinishedPulling="2026-02-16 21:11:51.691742152 +0000 UTC m=+929.621038090" observedRunningTime="2026-02-16 21:11:52.367808655 +0000 UTC m=+930.297104593" watchObservedRunningTime="2026-02-16 21:11:52.37199888 +0000 UTC m=+930.301294828" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.438448 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f46jc"] Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.446250 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f46jc"] Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.467763 4811 scope.go:117] "RemoveContainer" containerID="1e15fc989da382e495cf87505b0a6642309c700a453f3b7a178dbc89aa75419f" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.471987 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/40263486-d6cd-4aa0-9570-affea970096f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.472038 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2ckqb\" (UniqueName: \"kubernetes.io/projected/40263486-d6cd-4aa0-9570-affea970096f-kube-api-access-2ckqb\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.472080 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f1a24d15-0f33-4da4-af7e-80b619d294c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1a24d15-0f33-4da4-af7e-80b619d294c4\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.472110 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/40263486-d6cd-4aa0-9570-affea970096f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.472144 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40263486-d6cd-4aa0-9570-affea970096f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.472160 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/40263486-d6cd-4aa0-9570-affea970096f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.472181 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/40263486-d6cd-4aa0-9570-affea970096f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.472227 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/40263486-d6cd-4aa0-9570-affea970096f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.472262 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/40263486-d6cd-4aa0-9570-affea970096f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.472314 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/40263486-d6cd-4aa0-9570-affea970096f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.472342 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/40263486-d6cd-4aa0-9570-affea970096f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.473612 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40263486-d6cd-4aa0-9570-affea970096f-config-data\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.473686 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/40263486-d6cd-4aa0-9570-affea970096f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.474701 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/40263486-d6cd-4aa0-9570-affea970096f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.474976 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/40263486-d6cd-4aa0-9570-affea970096f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.476475 4811 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.477349 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f1a24d15-0f33-4da4-af7e-80b619d294c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1a24d15-0f33-4da4-af7e-80b619d294c4\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b88ec3ab373229a973cc3d291dbdb72e764e24c3c38bf9773ddc57b955a69b1e/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.477847 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/40263486-d6cd-4aa0-9570-affea970096f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.486680 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/40263486-d6cd-4aa0-9570-affea970096f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.491839 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ckqb\" (UniqueName: \"kubernetes.io/projected/40263486-d6cd-4aa0-9570-affea970096f-kube-api-access-2ckqb\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.492854 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/40263486-d6cd-4aa0-9570-affea970096f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.493022 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/40263486-d6cd-4aa0-9570-affea970096f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.512513 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/40263486-d6cd-4aa0-9570-affea970096f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.564246 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f1a24d15-0f33-4da4-af7e-80b619d294c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1a24d15-0f33-4da4-af7e-80b619d294c4\") pod \"rabbitmq-cell1-server-0\" (UID: \"40263486-d6cd-4aa0-9570-affea970096f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.594600 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.604505 4811 scope.go:117] "RemoveContainer" containerID="fa396acdd3b1ca49a596d6e58115359e56bbda6b50f543ad36ca98f28cb4d4d6" Feb 16 21:11:52 crc kubenswrapper[4811]: E0216 21:11:52.604857 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa396acdd3b1ca49a596d6e58115359e56bbda6b50f543ad36ca98f28cb4d4d6\": container with ID starting with fa396acdd3b1ca49a596d6e58115359e56bbda6b50f543ad36ca98f28cb4d4d6 not found: ID does not exist" containerID="fa396acdd3b1ca49a596d6e58115359e56bbda6b50f543ad36ca98f28cb4d4d6" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.604884 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa396acdd3b1ca49a596d6e58115359e56bbda6b50f543ad36ca98f28cb4d4d6"} err="failed to get container status \"fa396acdd3b1ca49a596d6e58115359e56bbda6b50f543ad36ca98f28cb4d4d6\": rpc error: code = NotFound desc = could not find container \"fa396acdd3b1ca49a596d6e58115359e56bbda6b50f543ad36ca98f28cb4d4d6\": container with ID starting with fa396acdd3b1ca49a596d6e58115359e56bbda6b50f543ad36ca98f28cb4d4d6 not found: ID does not exist" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.604903 4811 scope.go:117] "RemoveContainer" containerID="9f12e9d3983755e113653ca90cd2f7b4ec11dfe2bf419a1ff1bfb3e9a8f6486f" Feb 16 21:11:52 crc kubenswrapper[4811]: E0216 21:11:52.605172 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f12e9d3983755e113653ca90cd2f7b4ec11dfe2bf419a1ff1bfb3e9a8f6486f\": container with ID starting with 9f12e9d3983755e113653ca90cd2f7b4ec11dfe2bf419a1ff1bfb3e9a8f6486f not found: ID does not exist" containerID="9f12e9d3983755e113653ca90cd2f7b4ec11dfe2bf419a1ff1bfb3e9a8f6486f" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 
21:11:52.605190 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f12e9d3983755e113653ca90cd2f7b4ec11dfe2bf419a1ff1bfb3e9a8f6486f"} err="failed to get container status \"9f12e9d3983755e113653ca90cd2f7b4ec11dfe2bf419a1ff1bfb3e9a8f6486f\": rpc error: code = NotFound desc = could not find container \"9f12e9d3983755e113653ca90cd2f7b4ec11dfe2bf419a1ff1bfb3e9a8f6486f\": container with ID starting with 9f12e9d3983755e113653ca90cd2f7b4ec11dfe2bf419a1ff1bfb3e9a8f6486f not found: ID does not exist" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.605217 4811 scope.go:117] "RemoveContainer" containerID="1e15fc989da382e495cf87505b0a6642309c700a453f3b7a178dbc89aa75419f" Feb 16 21:11:52 crc kubenswrapper[4811]: E0216 21:11:52.605719 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e15fc989da382e495cf87505b0a6642309c700a453f3b7a178dbc89aa75419f\": container with ID starting with 1e15fc989da382e495cf87505b0a6642309c700a453f3b7a178dbc89aa75419f not found: ID does not exist" containerID="1e15fc989da382e495cf87505b0a6642309c700a453f3b7a178dbc89aa75419f" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.605737 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e15fc989da382e495cf87505b0a6642309c700a453f3b7a178dbc89aa75419f"} err="failed to get container status \"1e15fc989da382e495cf87505b0a6642309c700a453f3b7a178dbc89aa75419f\": rpc error: code = NotFound desc = could not find container \"1e15fc989da382e495cf87505b0a6642309c700a453f3b7a178dbc89aa75419f\": container with ID starting with 1e15fc989da382e495cf87505b0a6642309c700a453f3b7a178dbc89aa75419f not found: ID does not exist" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.717801 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95041e72-7998-4ea4-a9d9-185f246fcc70" 
path="/var/lib/kubelet/pods/95041e72-7998-4ea4-a9d9-185f246fcc70/volumes" Feb 16 21:11:52 crc kubenswrapper[4811]: I0216 21:11:52.860166 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.141294 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 21:11:53 crc kubenswrapper[4811]: W0216 21:11:53.148694 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd541633_15e7_4a12_99a4_72637521386d.slice/crio-8080b16eaa9ecf2ac0d1e1da9465a6ca71a2ae0190ce4573c8b07c5bc1753325 WatchSource:0}: Error finding container 8080b16eaa9ecf2ac0d1e1da9465a6ca71a2ae0190ce4573c8b07c5bc1753325: Status 404 returned error can't find the container with id 8080b16eaa9ecf2ac0d1e1da9465a6ca71a2ae0190ce4573c8b07c5bc1753325 Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.307604 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.381978 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cd541633-15e7-4a12-99a4-72637521386d","Type":"ContainerStarted","Data":"8080b16eaa9ecf2ac0d1e1da9465a6ca71a2ae0190ce4573c8b07c5bc1753325"} Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.580700 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.582077 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.623834 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.623856 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.624326 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-bvstb" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.624516 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.637719 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.658500 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.721682 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/32a12c18-c799-4092-8ba9-c89b2a5f713a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"32a12c18-c799-4092-8ba9-c89b2a5f713a\") " pod="openstack/openstack-galera-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.721759 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/32a12c18-c799-4092-8ba9-c89b2a5f713a-config-data-default\") pod \"openstack-galera-0\" (UID: \"32a12c18-c799-4092-8ba9-c89b2a5f713a\") " pod="openstack/openstack-galera-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.721790 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32a12c18-c799-4092-8ba9-c89b2a5f713a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"32a12c18-c799-4092-8ba9-c89b2a5f713a\") " pod="openstack/openstack-galera-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.721824 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-18f727e1-278e-4516-a4fb-c7620da6bdad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-18f727e1-278e-4516-a4fb-c7620da6bdad\") pod \"openstack-galera-0\" (UID: \"32a12c18-c799-4092-8ba9-c89b2a5f713a\") " pod="openstack/openstack-galera-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.721850 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/32a12c18-c799-4092-8ba9-c89b2a5f713a-kolla-config\") pod \"openstack-galera-0\" (UID: \"32a12c18-c799-4092-8ba9-c89b2a5f713a\") " pod="openstack/openstack-galera-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.721875 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/32a12c18-c799-4092-8ba9-c89b2a5f713a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"32a12c18-c799-4092-8ba9-c89b2a5f713a\") " pod="openstack/openstack-galera-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.721902 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9nvt\" (UniqueName: \"kubernetes.io/projected/32a12c18-c799-4092-8ba9-c89b2a5f713a-kube-api-access-x9nvt\") pod \"openstack-galera-0\" (UID: \"32a12c18-c799-4092-8ba9-c89b2a5f713a\") " pod="openstack/openstack-galera-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.721920 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32a12c18-c799-4092-8ba9-c89b2a5f713a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"32a12c18-c799-4092-8ba9-c89b2a5f713a\") " pod="openstack/openstack-galera-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.822851 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9nvt\" (UniqueName: \"kubernetes.io/projected/32a12c18-c799-4092-8ba9-c89b2a5f713a-kube-api-access-x9nvt\") pod \"openstack-galera-0\" (UID: \"32a12c18-c799-4092-8ba9-c89b2a5f713a\") " pod="openstack/openstack-galera-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.822906 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32a12c18-c799-4092-8ba9-c89b2a5f713a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"32a12c18-c799-4092-8ba9-c89b2a5f713a\") " pod="openstack/openstack-galera-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.822969 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/32a12c18-c799-4092-8ba9-c89b2a5f713a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"32a12c18-c799-4092-8ba9-c89b2a5f713a\") " pod="openstack/openstack-galera-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.823020 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/32a12c18-c799-4092-8ba9-c89b2a5f713a-config-data-default\") pod \"openstack-galera-0\" (UID: \"32a12c18-c799-4092-8ba9-c89b2a5f713a\") " pod="openstack/openstack-galera-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.823043 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/32a12c18-c799-4092-8ba9-c89b2a5f713a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"32a12c18-c799-4092-8ba9-c89b2a5f713a\") " pod="openstack/openstack-galera-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.823070 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-18f727e1-278e-4516-a4fb-c7620da6bdad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-18f727e1-278e-4516-a4fb-c7620da6bdad\") pod \"openstack-galera-0\" (UID: \"32a12c18-c799-4092-8ba9-c89b2a5f713a\") " pod="openstack/openstack-galera-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.823086 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/32a12c18-c799-4092-8ba9-c89b2a5f713a-kolla-config\") pod \"openstack-galera-0\" (UID: \"32a12c18-c799-4092-8ba9-c89b2a5f713a\") " pod="openstack/openstack-galera-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.823112 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/32a12c18-c799-4092-8ba9-c89b2a5f713a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"32a12c18-c799-4092-8ba9-c89b2a5f713a\") " pod="openstack/openstack-galera-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.823556 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/32a12c18-c799-4092-8ba9-c89b2a5f713a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"32a12c18-c799-4092-8ba9-c89b2a5f713a\") " pod="openstack/openstack-galera-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.824570 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/32a12c18-c799-4092-8ba9-c89b2a5f713a-config-data-default\") pod 
\"openstack-galera-0\" (UID: \"32a12c18-c799-4092-8ba9-c89b2a5f713a\") " pod="openstack/openstack-galera-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.825760 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/32a12c18-c799-4092-8ba9-c89b2a5f713a-kolla-config\") pod \"openstack-galera-0\" (UID: \"32a12c18-c799-4092-8ba9-c89b2a5f713a\") " pod="openstack/openstack-galera-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.826841 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32a12c18-c799-4092-8ba9-c89b2a5f713a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"32a12c18-c799-4092-8ba9-c89b2a5f713a\") " pod="openstack/openstack-galera-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.829821 4811 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.829858 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-18f727e1-278e-4516-a4fb-c7620da6bdad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-18f727e1-278e-4516-a4fb-c7620da6bdad\") pod \"openstack-galera-0\" (UID: \"32a12c18-c799-4092-8ba9-c89b2a5f713a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5f84778a2abed1319d1c6f397e3d4b318615d4dc4df4eaa6e18f376c794269f4/globalmount\"" pod="openstack/openstack-galera-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.829929 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32a12c18-c799-4092-8ba9-c89b2a5f713a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"32a12c18-c799-4092-8ba9-c89b2a5f713a\") " pod="openstack/openstack-galera-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.834173 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/32a12c18-c799-4092-8ba9-c89b2a5f713a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"32a12c18-c799-4092-8ba9-c89b2a5f713a\") " pod="openstack/openstack-galera-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.844289 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9nvt\" (UniqueName: \"kubernetes.io/projected/32a12c18-c799-4092-8ba9-c89b2a5f713a-kube-api-access-x9nvt\") pod \"openstack-galera-0\" (UID: \"32a12c18-c799-4092-8ba9-c89b2a5f713a\") " pod="openstack/openstack-galera-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.886436 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-18f727e1-278e-4516-a4fb-c7620da6bdad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-18f727e1-278e-4516-a4fb-c7620da6bdad\") pod 
\"openstack-galera-0\" (UID: \"32a12c18-c799-4092-8ba9-c89b2a5f713a\") " pod="openstack/openstack-galera-0" Feb 16 21:11:53 crc kubenswrapper[4811]: I0216 21:11:53.958479 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.744271 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.757851 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.757978 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.783132 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.783418 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.783568 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-pjgdl" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.783889 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.815544 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.816781 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.820826 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.821243 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-np7hd" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.821398 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.829167 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.841859 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/10a8f77b-e218-4975-9411-8c380eda2c5a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"10a8f77b-e218-4975-9411-8c380eda2c5a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.841913 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a8f77b-e218-4975-9411-8c380eda2c5a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"10a8f77b-e218-4975-9411-8c380eda2c5a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.841939 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-eb80460f-1f4e-4e10-8a53-7f0b3e0a80fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb80460f-1f4e-4e10-8a53-7f0b3e0a80fa\") pod \"openstack-cell1-galera-0\" (UID: \"10a8f77b-e218-4975-9411-8c380eda2c5a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:54 crc 
kubenswrapper[4811]: I0216 21:11:54.841958 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/10a8f77b-e218-4975-9411-8c380eda2c5a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"10a8f77b-e218-4975-9411-8c380eda2c5a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.842004 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/10a8f77b-e218-4975-9411-8c380eda2c5a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"10a8f77b-e218-4975-9411-8c380eda2c5a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.842019 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10a8f77b-e218-4975-9411-8c380eda2c5a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"10a8f77b-e218-4975-9411-8c380eda2c5a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.842072 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwtzd\" (UniqueName: \"kubernetes.io/projected/10a8f77b-e218-4975-9411-8c380eda2c5a-kube-api-access-rwtzd\") pod \"openstack-cell1-galera-0\" (UID: \"10a8f77b-e218-4975-9411-8c380eda2c5a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.842088 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/10a8f77b-e218-4975-9411-8c380eda2c5a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"10a8f77b-e218-4975-9411-8c380eda2c5a\") " 
pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.943277 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/10a8f77b-e218-4975-9411-8c380eda2c5a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"10a8f77b-e218-4975-9411-8c380eda2c5a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.943353 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/10a8f77b-e218-4975-9411-8c380eda2c5a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"10a8f77b-e218-4975-9411-8c380eda2c5a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.943396 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a8f77b-e218-4975-9411-8c380eda2c5a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"10a8f77b-e218-4975-9411-8c380eda2c5a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.943414 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-eb80460f-1f4e-4e10-8a53-7f0b3e0a80fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb80460f-1f4e-4e10-8a53-7f0b3e0a80fa\") pod \"openstack-cell1-galera-0\" (UID: \"10a8f77b-e218-4975-9411-8c380eda2c5a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.943433 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/10a8f77b-e218-4975-9411-8c380eda2c5a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"10a8f77b-e218-4975-9411-8c380eda2c5a\") " 
pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.943465 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l4kc\" (UniqueName: \"kubernetes.io/projected/211f2606-1d07-4c2d-8533-d53495a99d5b-kube-api-access-9l4kc\") pod \"memcached-0\" (UID: \"211f2606-1d07-4c2d-8533-d53495a99d5b\") " pod="openstack/memcached-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.943484 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/211f2606-1d07-4c2d-8533-d53495a99d5b-kolla-config\") pod \"memcached-0\" (UID: \"211f2606-1d07-4c2d-8533-d53495a99d5b\") " pod="openstack/memcached-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.943505 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/10a8f77b-e218-4975-9411-8c380eda2c5a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"10a8f77b-e218-4975-9411-8c380eda2c5a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.943522 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10a8f77b-e218-4975-9411-8c380eda2c5a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"10a8f77b-e218-4975-9411-8c380eda2c5a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.943550 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/211f2606-1d07-4c2d-8533-d53495a99d5b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"211f2606-1d07-4c2d-8533-d53495a99d5b\") " pod="openstack/memcached-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 
21:11:54.943573 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211f2606-1d07-4c2d-8533-d53495a99d5b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"211f2606-1d07-4c2d-8533-d53495a99d5b\") " pod="openstack/memcached-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.943606 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/211f2606-1d07-4c2d-8533-d53495a99d5b-config-data\") pod \"memcached-0\" (UID: \"211f2606-1d07-4c2d-8533-d53495a99d5b\") " pod="openstack/memcached-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.943627 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwtzd\" (UniqueName: \"kubernetes.io/projected/10a8f77b-e218-4975-9411-8c380eda2c5a-kube-api-access-rwtzd\") pod \"openstack-cell1-galera-0\" (UID: \"10a8f77b-e218-4975-9411-8c380eda2c5a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.943875 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/10a8f77b-e218-4975-9411-8c380eda2c5a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"10a8f77b-e218-4975-9411-8c380eda2c5a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.944714 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/10a8f77b-e218-4975-9411-8c380eda2c5a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"10a8f77b-e218-4975-9411-8c380eda2c5a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.945051 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-default\" (UniqueName: \"kubernetes.io/configmap/10a8f77b-e218-4975-9411-8c380eda2c5a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"10a8f77b-e218-4975-9411-8c380eda2c5a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.949291 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10a8f77b-e218-4975-9411-8c380eda2c5a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"10a8f77b-e218-4975-9411-8c380eda2c5a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.956588 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a8f77b-e218-4975-9411-8c380eda2c5a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"10a8f77b-e218-4975-9411-8c380eda2c5a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.972841 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/10a8f77b-e218-4975-9411-8c380eda2c5a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"10a8f77b-e218-4975-9411-8c380eda2c5a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.985727 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwtzd\" (UniqueName: \"kubernetes.io/projected/10a8f77b-e218-4975-9411-8c380eda2c5a-kube-api-access-rwtzd\") pod \"openstack-cell1-galera-0\" (UID: \"10a8f77b-e218-4975-9411-8c380eda2c5a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.986801 4811 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:11:54 crc kubenswrapper[4811]: I0216 21:11:54.986843 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-eb80460f-1f4e-4e10-8a53-7f0b3e0a80fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb80460f-1f4e-4e10-8a53-7f0b3e0a80fa\") pod \"openstack-cell1-galera-0\" (UID: \"10a8f77b-e218-4975-9411-8c380eda2c5a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/dbd8c32490b2f77365def3a0371676e5ac040a58b4a882ca169dfe23c699baa5/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:55 crc kubenswrapper[4811]: I0216 21:11:55.039440 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-eb80460f-1f4e-4e10-8a53-7f0b3e0a80fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb80460f-1f4e-4e10-8a53-7f0b3e0a80fa\") pod \"openstack-cell1-galera-0\" (UID: \"10a8f77b-e218-4975-9411-8c380eda2c5a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:55 crc kubenswrapper[4811]: I0216 21:11:55.044999 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l4kc\" (UniqueName: \"kubernetes.io/projected/211f2606-1d07-4c2d-8533-d53495a99d5b-kube-api-access-9l4kc\") pod \"memcached-0\" (UID: \"211f2606-1d07-4c2d-8533-d53495a99d5b\") " pod="openstack/memcached-0" Feb 16 21:11:55 crc kubenswrapper[4811]: I0216 21:11:55.045041 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/211f2606-1d07-4c2d-8533-d53495a99d5b-kolla-config\") pod \"memcached-0\" (UID: \"211f2606-1d07-4c2d-8533-d53495a99d5b\") " pod="openstack/memcached-0" Feb 16 21:11:55 crc kubenswrapper[4811]: I0216 21:11:55.045061 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/211f2606-1d07-4c2d-8533-d53495a99d5b-memcached-tls-certs\") pod 
\"memcached-0\" (UID: \"211f2606-1d07-4c2d-8533-d53495a99d5b\") " pod="openstack/memcached-0" Feb 16 21:11:55 crc kubenswrapper[4811]: I0216 21:11:55.045087 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211f2606-1d07-4c2d-8533-d53495a99d5b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"211f2606-1d07-4c2d-8533-d53495a99d5b\") " pod="openstack/memcached-0" Feb 16 21:11:55 crc kubenswrapper[4811]: I0216 21:11:55.045118 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/211f2606-1d07-4c2d-8533-d53495a99d5b-config-data\") pod \"memcached-0\" (UID: \"211f2606-1d07-4c2d-8533-d53495a99d5b\") " pod="openstack/memcached-0" Feb 16 21:11:55 crc kubenswrapper[4811]: I0216 21:11:55.045787 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/211f2606-1d07-4c2d-8533-d53495a99d5b-config-data\") pod \"memcached-0\" (UID: \"211f2606-1d07-4c2d-8533-d53495a99d5b\") " pod="openstack/memcached-0" Feb 16 21:11:55 crc kubenswrapper[4811]: I0216 21:11:55.046587 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/211f2606-1d07-4c2d-8533-d53495a99d5b-kolla-config\") pod \"memcached-0\" (UID: \"211f2606-1d07-4c2d-8533-d53495a99d5b\") " pod="openstack/memcached-0" Feb 16 21:11:55 crc kubenswrapper[4811]: I0216 21:11:55.054919 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211f2606-1d07-4c2d-8533-d53495a99d5b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"211f2606-1d07-4c2d-8533-d53495a99d5b\") " pod="openstack/memcached-0" Feb 16 21:11:55 crc kubenswrapper[4811]: I0216 21:11:55.068230 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/211f2606-1d07-4c2d-8533-d53495a99d5b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"211f2606-1d07-4c2d-8533-d53495a99d5b\") " pod="openstack/memcached-0" Feb 16 21:11:55 crc kubenswrapper[4811]: I0216 21:11:55.076829 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l4kc\" (UniqueName: \"kubernetes.io/projected/211f2606-1d07-4c2d-8533-d53495a99d5b-kube-api-access-9l4kc\") pod \"memcached-0\" (UID: \"211f2606-1d07-4c2d-8533-d53495a99d5b\") " pod="openstack/memcached-0" Feb 16 21:11:55 crc kubenswrapper[4811]: I0216 21:11:55.106530 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 16 21:11:55 crc kubenswrapper[4811]: I0216 21:11:55.141761 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 16 21:11:56 crc kubenswrapper[4811]: I0216 21:11:56.976327 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 21:11:56 crc kubenswrapper[4811]: I0216 21:11:56.977541 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 21:11:56 crc kubenswrapper[4811]: I0216 21:11:56.981842 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-mmjzm" Feb 16 21:11:56 crc kubenswrapper[4811]: I0216 21:11:56.988998 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.087146 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmxrj\" (UniqueName: \"kubernetes.io/projected/ffc95bb9-a405-4472-9879-f2dc826ffdb9-kube-api-access-tmxrj\") pod \"kube-state-metrics-0\" (UID: \"ffc95bb9-a405-4472-9879-f2dc826ffdb9\") " pod="openstack/kube-state-metrics-0" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.188918 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmxrj\" (UniqueName: \"kubernetes.io/projected/ffc95bb9-a405-4472-9879-f2dc826ffdb9-kube-api-access-tmxrj\") pod \"kube-state-metrics-0\" (UID: \"ffc95bb9-a405-4472-9879-f2dc826ffdb9\") " pod="openstack/kube-state-metrics-0" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.211061 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-92bf7"] Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.212817 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-92bf7" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.219496 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmxrj\" (UniqueName: \"kubernetes.io/projected/ffc95bb9-a405-4472-9879-f2dc826ffdb9-kube-api-access-tmxrj\") pod \"kube-state-metrics-0\" (UID: \"ffc95bb9-a405-4472-9879-f2dc826ffdb9\") " pod="openstack/kube-state-metrics-0" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.249391 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-92bf7"] Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.293486 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7s26\" (UniqueName: \"kubernetes.io/projected/6fb34ae7-4d56-44b0-9db6-c890b1d57fdf-kube-api-access-r7s26\") pod \"certified-operators-92bf7\" (UID: \"6fb34ae7-4d56-44b0-9db6-c890b1d57fdf\") " pod="openshift-marketplace/certified-operators-92bf7" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.293587 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fb34ae7-4d56-44b0-9db6-c890b1d57fdf-catalog-content\") pod \"certified-operators-92bf7\" (UID: \"6fb34ae7-4d56-44b0-9db6-c890b1d57fdf\") " pod="openshift-marketplace/certified-operators-92bf7" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.293607 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fb34ae7-4d56-44b0-9db6-c890b1d57fdf-utilities\") pod \"certified-operators-92bf7\" (UID: \"6fb34ae7-4d56-44b0-9db6-c890b1d57fdf\") " pod="openshift-marketplace/certified-operators-92bf7" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.299478 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.394980 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fb34ae7-4d56-44b0-9db6-c890b1d57fdf-catalog-content\") pod \"certified-operators-92bf7\" (UID: \"6fb34ae7-4d56-44b0-9db6-c890b1d57fdf\") " pod="openshift-marketplace/certified-operators-92bf7" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.395257 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fb34ae7-4d56-44b0-9db6-c890b1d57fdf-utilities\") pod \"certified-operators-92bf7\" (UID: \"6fb34ae7-4d56-44b0-9db6-c890b1d57fdf\") " pod="openshift-marketplace/certified-operators-92bf7" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.395317 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7s26\" (UniqueName: \"kubernetes.io/projected/6fb34ae7-4d56-44b0-9db6-c890b1d57fdf-kube-api-access-r7s26\") pod \"certified-operators-92bf7\" (UID: \"6fb34ae7-4d56-44b0-9db6-c890b1d57fdf\") " pod="openshift-marketplace/certified-operators-92bf7" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.395780 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fb34ae7-4d56-44b0-9db6-c890b1d57fdf-utilities\") pod \"certified-operators-92bf7\" (UID: \"6fb34ae7-4d56-44b0-9db6-c890b1d57fdf\") " pod="openshift-marketplace/certified-operators-92bf7" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.396320 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fb34ae7-4d56-44b0-9db6-c890b1d57fdf-catalog-content\") pod \"certified-operators-92bf7\" (UID: \"6fb34ae7-4d56-44b0-9db6-c890b1d57fdf\") " 
pod="openshift-marketplace/certified-operators-92bf7" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.420799 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7s26\" (UniqueName: \"kubernetes.io/projected/6fb34ae7-4d56-44b0-9db6-c890b1d57fdf-kube-api-access-r7s26\") pod \"certified-operators-92bf7\" (UID: \"6fb34ae7-4d56-44b0-9db6-c890b1d57fdf\") " pod="openshift-marketplace/certified-operators-92bf7" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.578071 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-92bf7" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.723781 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gpdwb" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.723823 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gpdwb" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.734891 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.736645 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.738876 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.739072 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.739208 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.740113 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.743288 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-ffqqt" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.760080 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.801879 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/117fc5a2-d29b-4844-9dc6-4359d1c4c24d-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"117fc5a2-d29b-4844-9dc6-4359d1c4c24d\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.801920 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnx7q\" (UniqueName: \"kubernetes.io/projected/117fc5a2-d29b-4844-9dc6-4359d1c4c24d-kube-api-access-cnx7q\") pod \"alertmanager-metric-storage-0\" (UID: \"117fc5a2-d29b-4844-9dc6-4359d1c4c24d\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 
21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.802043 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/117fc5a2-d29b-4844-9dc6-4359d1c4c24d-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"117fc5a2-d29b-4844-9dc6-4359d1c4c24d\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.802095 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/117fc5a2-d29b-4844-9dc6-4359d1c4c24d-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"117fc5a2-d29b-4844-9dc6-4359d1c4c24d\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.802149 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/117fc5a2-d29b-4844-9dc6-4359d1c4c24d-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"117fc5a2-d29b-4844-9dc6-4359d1c4c24d\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.802267 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/117fc5a2-d29b-4844-9dc6-4359d1c4c24d-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"117fc5a2-d29b-4844-9dc6-4359d1c4c24d\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.802316 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/117fc5a2-d29b-4844-9dc6-4359d1c4c24d-web-config\") pod \"alertmanager-metric-storage-0\" (UID: 
\"117fc5a2-d29b-4844-9dc6-4359d1c4c24d\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.821083 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gpdwb" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.903320 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/117fc5a2-d29b-4844-9dc6-4359d1c4c24d-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"117fc5a2-d29b-4844-9dc6-4359d1c4c24d\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.903401 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/117fc5a2-d29b-4844-9dc6-4359d1c4c24d-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"117fc5a2-d29b-4844-9dc6-4359d1c4c24d\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.903422 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnx7q\" (UniqueName: \"kubernetes.io/projected/117fc5a2-d29b-4844-9dc6-4359d1c4c24d-kube-api-access-cnx7q\") pod \"alertmanager-metric-storage-0\" (UID: \"117fc5a2-d29b-4844-9dc6-4359d1c4c24d\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.903462 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/117fc5a2-d29b-4844-9dc6-4359d1c4c24d-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"117fc5a2-d29b-4844-9dc6-4359d1c4c24d\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.903480 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/secret/117fc5a2-d29b-4844-9dc6-4359d1c4c24d-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"117fc5a2-d29b-4844-9dc6-4359d1c4c24d\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.903507 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/117fc5a2-d29b-4844-9dc6-4359d1c4c24d-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"117fc5a2-d29b-4844-9dc6-4359d1c4c24d\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.903538 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/117fc5a2-d29b-4844-9dc6-4359d1c4c24d-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"117fc5a2-d29b-4844-9dc6-4359d1c4c24d\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.903968 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/117fc5a2-d29b-4844-9dc6-4359d1c4c24d-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"117fc5a2-d29b-4844-9dc6-4359d1c4c24d\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.907405 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/117fc5a2-d29b-4844-9dc6-4359d1c4c24d-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"117fc5a2-d29b-4844-9dc6-4359d1c4c24d\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.908544 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" 
(UniqueName: \"kubernetes.io/secret/117fc5a2-d29b-4844-9dc6-4359d1c4c24d-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"117fc5a2-d29b-4844-9dc6-4359d1c4c24d\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.908866 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/117fc5a2-d29b-4844-9dc6-4359d1c4c24d-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"117fc5a2-d29b-4844-9dc6-4359d1c4c24d\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.909322 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/117fc5a2-d29b-4844-9dc6-4359d1c4c24d-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"117fc5a2-d29b-4844-9dc6-4359d1c4c24d\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.909790 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/117fc5a2-d29b-4844-9dc6-4359d1c4c24d-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"117fc5a2-d29b-4844-9dc6-4359d1c4c24d\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 21:11:57 crc kubenswrapper[4811]: I0216 21:11:57.923900 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnx7q\" (UniqueName: \"kubernetes.io/projected/117fc5a2-d29b-4844-9dc6-4359d1c4c24d-kube-api-access-cnx7q\") pod \"alertmanager-metric-storage-0\" (UID: \"117fc5a2-d29b-4844-9dc6-4359d1c4c24d\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.054100 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.299862 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.313299 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.313621 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.319366 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.319409 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.319704 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.319569 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.319624 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-p56vd" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.319663 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.322416 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.322530 4811 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.413918 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/4247055a-8ca2-4a03-9a3a-d582d674b38a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.414320 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgs5z\" (UniqueName: \"kubernetes.io/projected/4247055a-8ca2-4a03-9a3a-d582d674b38a-kube-api-access-lgs5z\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.414359 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4247055a-8ca2-4a03-9a3a-d582d674b38a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.414385 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4247055a-8ca2-4a03-9a3a-d582d674b38a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.414487 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/4247055a-8ca2-4a03-9a3a-d582d674b38a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.414515 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4247055a-8ca2-4a03-9a3a-d582d674b38a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.414546 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/4247055a-8ca2-4a03-9a3a-d582d674b38a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.414578 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4247055a-8ca2-4a03-9a3a-d582d674b38a-config\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.414650 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4bcc5437-f61d-4c30-a2fb-514fee9806b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4bcc5437-f61d-4c30-a2fb-514fee9806b3\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.414676 4811 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4247055a-8ca2-4a03-9a3a-d582d674b38a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.499861 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gpdwb" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.516305 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4bcc5437-f61d-4c30-a2fb-514fee9806b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4bcc5437-f61d-4c30-a2fb-514fee9806b3\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.516348 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4247055a-8ca2-4a03-9a3a-d582d674b38a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.516373 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/4247055a-8ca2-4a03-9a3a-d582d674b38a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.516399 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgs5z\" (UniqueName: 
\"kubernetes.io/projected/4247055a-8ca2-4a03-9a3a-d582d674b38a-kube-api-access-lgs5z\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.516422 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4247055a-8ca2-4a03-9a3a-d582d674b38a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.516440 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4247055a-8ca2-4a03-9a3a-d582d674b38a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.516505 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4247055a-8ca2-4a03-9a3a-d582d674b38a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.516522 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4247055a-8ca2-4a03-9a3a-d582d674b38a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.516542 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/4247055a-8ca2-4a03-9a3a-d582d674b38a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.516561 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4247055a-8ca2-4a03-9a3a-d582d674b38a-config\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.517604 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/4247055a-8ca2-4a03-9a3a-d582d674b38a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.517650 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/4247055a-8ca2-4a03-9a3a-d582d674b38a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.518048 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4247055a-8ca2-4a03-9a3a-d582d674b38a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.524760 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4247055a-8ca2-4a03-9a3a-d582d674b38a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.533829 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4247055a-8ca2-4a03-9a3a-d582d674b38a-config\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.533955 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4247055a-8ca2-4a03-9a3a-d582d674b38a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.534139 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4247055a-8ca2-4a03-9a3a-d582d674b38a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.534424 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4247055a-8ca2-4a03-9a3a-d582d674b38a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.535523 4811 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.535650 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4bcc5437-f61d-4c30-a2fb-514fee9806b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4bcc5437-f61d-4c30-a2fb-514fee9806b3\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/18767a3611d798f8934d1c357327d08a5ff746f9fb9afdbc502a0d35823d9e91/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.537663 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgs5z\" (UniqueName: \"kubernetes.io/projected/4247055a-8ca2-4a03-9a3a-d582d674b38a-kube-api-access-lgs5z\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.572851 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4bcc5437-f61d-4c30-a2fb-514fee9806b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4bcc5437-f61d-4c30-a2fb-514fee9806b3\") pod \"prometheus-metric-storage-0\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:58 crc kubenswrapper[4811]: I0216 21:11:58.654056 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 21:11:59 crc kubenswrapper[4811]: I0216 21:11:59.448669 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"40263486-d6cd-4aa0-9570-affea970096f","Type":"ContainerStarted","Data":"fa16cf71cfc2daa8426fbefc7aa54670e18922f3edb26ad10ccf95b0d8cb8077"} Feb 16 21:12:00 crc kubenswrapper[4811]: I0216 21:12:00.193552 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gpdwb"] Feb 16 21:12:00 crc kubenswrapper[4811]: I0216 21:12:00.465996 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gpdwb" podUID="e6100253-d183-48b3-bcdf-f193f07d42a1" containerName="registry-server" containerID="cri-o://e8f122215a6c2e0fdd6370abe92235ed41ca46b90b12181d96f2033073b1cf9c" gracePeriod=2 Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.033634 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qhsfb"] Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.034954 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qhsfb" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.037557 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-kkmxv" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.038027 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.038160 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.065186 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qhsfb"] Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.074738 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-fktqj"] Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.077887 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-fktqj" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.100113 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-fktqj"] Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.179117 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b8edc00a-d032-460b-9e97-d784b4fdfe5c-var-log-ovn\") pod \"ovn-controller-qhsfb\" (UID: \"b8edc00a-d032-460b-9e97-d784b4fdfe5c\") " pod="openstack/ovn-controller-qhsfb" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.179168 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/08f73916-0e3c-4ef7-97e7-a13b9923b620-var-log\") pod \"ovn-controller-ovs-fktqj\" (UID: \"08f73916-0e3c-4ef7-97e7-a13b9923b620\") " pod="openstack/ovn-controller-ovs-fktqj" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.179188 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/08f73916-0e3c-4ef7-97e7-a13b9923b620-etc-ovs\") pod \"ovn-controller-ovs-fktqj\" (UID: \"08f73916-0e3c-4ef7-97e7-a13b9923b620\") " pod="openstack/ovn-controller-ovs-fktqj" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.179218 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8edc00a-d032-460b-9e97-d784b4fdfe5c-combined-ca-bundle\") pod \"ovn-controller-qhsfb\" (UID: \"b8edc00a-d032-460b-9e97-d784b4fdfe5c\") " pod="openstack/ovn-controller-qhsfb" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.179236 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m5lc\" (UniqueName: 
\"kubernetes.io/projected/b8edc00a-d032-460b-9e97-d784b4fdfe5c-kube-api-access-5m5lc\") pod \"ovn-controller-qhsfb\" (UID: \"b8edc00a-d032-460b-9e97-d784b4fdfe5c\") " pod="openstack/ovn-controller-qhsfb" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.179257 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/08f73916-0e3c-4ef7-97e7-a13b9923b620-var-lib\") pod \"ovn-controller-ovs-fktqj\" (UID: \"08f73916-0e3c-4ef7-97e7-a13b9923b620\") " pod="openstack/ovn-controller-ovs-fktqj" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.179281 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8edc00a-d032-460b-9e97-d784b4fdfe5c-scripts\") pod \"ovn-controller-qhsfb\" (UID: \"b8edc00a-d032-460b-9e97-d784b4fdfe5c\") " pod="openstack/ovn-controller-qhsfb" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.179306 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xcnp\" (UniqueName: \"kubernetes.io/projected/08f73916-0e3c-4ef7-97e7-a13b9923b620-kube-api-access-4xcnp\") pod \"ovn-controller-ovs-fktqj\" (UID: \"08f73916-0e3c-4ef7-97e7-a13b9923b620\") " pod="openstack/ovn-controller-ovs-fktqj" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.179371 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/08f73916-0e3c-4ef7-97e7-a13b9923b620-var-run\") pod \"ovn-controller-ovs-fktqj\" (UID: \"08f73916-0e3c-4ef7-97e7-a13b9923b620\") " pod="openstack/ovn-controller-ovs-fktqj" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.179392 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b8edc00a-d032-460b-9e97-d784b4fdfe5c-ovn-controller-tls-certs\") pod \"ovn-controller-qhsfb\" (UID: \"b8edc00a-d032-460b-9e97-d784b4fdfe5c\") " pod="openstack/ovn-controller-qhsfb" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.179431 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b8edc00a-d032-460b-9e97-d784b4fdfe5c-var-run\") pod \"ovn-controller-qhsfb\" (UID: \"b8edc00a-d032-460b-9e97-d784b4fdfe5c\") " pod="openstack/ovn-controller-qhsfb" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.179548 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8edc00a-d032-460b-9e97-d784b4fdfe5c-var-run-ovn\") pod \"ovn-controller-qhsfb\" (UID: \"b8edc00a-d032-460b-9e97-d784b4fdfe5c\") " pod="openstack/ovn-controller-qhsfb" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.179666 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08f73916-0e3c-4ef7-97e7-a13b9923b620-scripts\") pod \"ovn-controller-ovs-fktqj\" (UID: \"08f73916-0e3c-4ef7-97e7-a13b9923b620\") " pod="openstack/ovn-controller-ovs-fktqj" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.281566 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8edc00a-d032-460b-9e97-d784b4fdfe5c-scripts\") pod \"ovn-controller-qhsfb\" (UID: \"b8edc00a-d032-460b-9e97-d784b4fdfe5c\") " pod="openstack/ovn-controller-qhsfb" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.281627 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xcnp\" (UniqueName: \"kubernetes.io/projected/08f73916-0e3c-4ef7-97e7-a13b9923b620-kube-api-access-4xcnp\") pod 
\"ovn-controller-ovs-fktqj\" (UID: \"08f73916-0e3c-4ef7-97e7-a13b9923b620\") " pod="openstack/ovn-controller-ovs-fktqj" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.281682 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/08f73916-0e3c-4ef7-97e7-a13b9923b620-var-run\") pod \"ovn-controller-ovs-fktqj\" (UID: \"08f73916-0e3c-4ef7-97e7-a13b9923b620\") " pod="openstack/ovn-controller-ovs-fktqj" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.281713 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8edc00a-d032-460b-9e97-d784b4fdfe5c-ovn-controller-tls-certs\") pod \"ovn-controller-qhsfb\" (UID: \"b8edc00a-d032-460b-9e97-d784b4fdfe5c\") " pod="openstack/ovn-controller-qhsfb" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.281766 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b8edc00a-d032-460b-9e97-d784b4fdfe5c-var-run\") pod \"ovn-controller-qhsfb\" (UID: \"b8edc00a-d032-460b-9e97-d784b4fdfe5c\") " pod="openstack/ovn-controller-qhsfb" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.281794 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8edc00a-d032-460b-9e97-d784b4fdfe5c-var-run-ovn\") pod \"ovn-controller-qhsfb\" (UID: \"b8edc00a-d032-460b-9e97-d784b4fdfe5c\") " pod="openstack/ovn-controller-qhsfb" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.281838 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08f73916-0e3c-4ef7-97e7-a13b9923b620-scripts\") pod \"ovn-controller-ovs-fktqj\" (UID: \"08f73916-0e3c-4ef7-97e7-a13b9923b620\") " pod="openstack/ovn-controller-ovs-fktqj" Feb 16 21:12:01 crc 
kubenswrapper[4811]: I0216 21:12:01.281864 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b8edc00a-d032-460b-9e97-d784b4fdfe5c-var-log-ovn\") pod \"ovn-controller-qhsfb\" (UID: \"b8edc00a-d032-460b-9e97-d784b4fdfe5c\") " pod="openstack/ovn-controller-qhsfb" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.281890 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/08f73916-0e3c-4ef7-97e7-a13b9923b620-var-log\") pod \"ovn-controller-ovs-fktqj\" (UID: \"08f73916-0e3c-4ef7-97e7-a13b9923b620\") " pod="openstack/ovn-controller-ovs-fktqj" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.281911 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/08f73916-0e3c-4ef7-97e7-a13b9923b620-etc-ovs\") pod \"ovn-controller-ovs-fktqj\" (UID: \"08f73916-0e3c-4ef7-97e7-a13b9923b620\") " pod="openstack/ovn-controller-ovs-fktqj" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.281930 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8edc00a-d032-460b-9e97-d784b4fdfe5c-combined-ca-bundle\") pod \"ovn-controller-qhsfb\" (UID: \"b8edc00a-d032-460b-9e97-d784b4fdfe5c\") " pod="openstack/ovn-controller-qhsfb" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.281951 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m5lc\" (UniqueName: \"kubernetes.io/projected/b8edc00a-d032-460b-9e97-d784b4fdfe5c-kube-api-access-5m5lc\") pod \"ovn-controller-qhsfb\" (UID: \"b8edc00a-d032-460b-9e97-d784b4fdfe5c\") " pod="openstack/ovn-controller-qhsfb" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.281974 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lib\" (UniqueName: \"kubernetes.io/host-path/08f73916-0e3c-4ef7-97e7-a13b9923b620-var-lib\") pod \"ovn-controller-ovs-fktqj\" (UID: \"08f73916-0e3c-4ef7-97e7-a13b9923b620\") " pod="openstack/ovn-controller-ovs-fktqj" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.282537 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/08f73916-0e3c-4ef7-97e7-a13b9923b620-var-lib\") pod \"ovn-controller-ovs-fktqj\" (UID: \"08f73916-0e3c-4ef7-97e7-a13b9923b620\") " pod="openstack/ovn-controller-ovs-fktqj" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.284466 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/08f73916-0e3c-4ef7-97e7-a13b9923b620-var-run\") pod \"ovn-controller-ovs-fktqj\" (UID: \"08f73916-0e3c-4ef7-97e7-a13b9923b620\") " pod="openstack/ovn-controller-ovs-fktqj" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.284521 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b8edc00a-d032-460b-9e97-d784b4fdfe5c-var-run\") pod \"ovn-controller-qhsfb\" (UID: \"b8edc00a-d032-460b-9e97-d784b4fdfe5c\") " pod="openstack/ovn-controller-qhsfb" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.284651 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/08f73916-0e3c-4ef7-97e7-a13b9923b620-etc-ovs\") pod \"ovn-controller-ovs-fktqj\" (UID: \"08f73916-0e3c-4ef7-97e7-a13b9923b620\") " pod="openstack/ovn-controller-ovs-fktqj" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.284769 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8edc00a-d032-460b-9e97-d784b4fdfe5c-scripts\") pod \"ovn-controller-qhsfb\" (UID: \"b8edc00a-d032-460b-9e97-d784b4fdfe5c\") " 
pod="openstack/ovn-controller-qhsfb" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.284785 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8edc00a-d032-460b-9e97-d784b4fdfe5c-var-run-ovn\") pod \"ovn-controller-qhsfb\" (UID: \"b8edc00a-d032-460b-9e97-d784b4fdfe5c\") " pod="openstack/ovn-controller-qhsfb" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.284804 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b8edc00a-d032-460b-9e97-d784b4fdfe5c-var-log-ovn\") pod \"ovn-controller-qhsfb\" (UID: \"b8edc00a-d032-460b-9e97-d784b4fdfe5c\") " pod="openstack/ovn-controller-qhsfb" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.284898 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/08f73916-0e3c-4ef7-97e7-a13b9923b620-var-log\") pod \"ovn-controller-ovs-fktqj\" (UID: \"08f73916-0e3c-4ef7-97e7-a13b9923b620\") " pod="openstack/ovn-controller-ovs-fktqj" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.286272 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08f73916-0e3c-4ef7-97e7-a13b9923b620-scripts\") pod \"ovn-controller-ovs-fktqj\" (UID: \"08f73916-0e3c-4ef7-97e7-a13b9923b620\") " pod="openstack/ovn-controller-ovs-fktqj" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.291998 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8edc00a-d032-460b-9e97-d784b4fdfe5c-combined-ca-bundle\") pod \"ovn-controller-qhsfb\" (UID: \"b8edc00a-d032-460b-9e97-d784b4fdfe5c\") " pod="openstack/ovn-controller-qhsfb" Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.292015 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8edc00a-d032-460b-9e97-d784b4fdfe5c-ovn-controller-tls-certs\") pod \"ovn-controller-qhsfb\" (UID: \"b8edc00a-d032-460b-9e97-d784b4fdfe5c\") " pod="openstack/ovn-controller-qhsfb"
Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.304608 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xcnp\" (UniqueName: \"kubernetes.io/projected/08f73916-0e3c-4ef7-97e7-a13b9923b620-kube-api-access-4xcnp\") pod \"ovn-controller-ovs-fktqj\" (UID: \"08f73916-0e3c-4ef7-97e7-a13b9923b620\") " pod="openstack/ovn-controller-ovs-fktqj"
Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.304769 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m5lc\" (UniqueName: \"kubernetes.io/projected/b8edc00a-d032-460b-9e97-d784b4fdfe5c-kube-api-access-5m5lc\") pod \"ovn-controller-qhsfb\" (UID: \"b8edc00a-d032-460b-9e97-d784b4fdfe5c\") " pod="openstack/ovn-controller-qhsfb"
Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.376448 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qhsfb"
Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.409816 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-fktqj"
Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.493416 4811 generic.go:334] "Generic (PLEG): container finished" podID="e6100253-d183-48b3-bcdf-f193f07d42a1" containerID="e8f122215a6c2e0fdd6370abe92235ed41ca46b90b12181d96f2033073b1cf9c" exitCode=0
Feb 16 21:12:01 crc kubenswrapper[4811]: I0216 21:12:01.494079 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpdwb" event={"ID":"e6100253-d183-48b3-bcdf-f193f07d42a1","Type":"ContainerDied","Data":"e8f122215a6c2e0fdd6370abe92235ed41ca46b90b12181d96f2033073b1cf9c"}
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.294263 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.296166 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.297916 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.298305 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.299009 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-fdvpn"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.299219 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.299405 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.348258 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.441252 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-950ae39f-fa84-4c47-9ab2-dea5a4f61ed8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-950ae39f-fa84-4c47-9ab2-dea5a4f61ed8\") pod \"ovsdbserver-nb-0\" (UID: \"c8c25051-577c-41fd-a7af-fec64121e954\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.441611 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdmq7\" (UniqueName: \"kubernetes.io/projected/c8c25051-577c-41fd-a7af-fec64121e954-kube-api-access-zdmq7\") pod \"ovsdbserver-nb-0\" (UID: \"c8c25051-577c-41fd-a7af-fec64121e954\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.441654 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c8c25051-577c-41fd-a7af-fec64121e954-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c8c25051-577c-41fd-a7af-fec64121e954\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.441700 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8c25051-577c-41fd-a7af-fec64121e954-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c8c25051-577c-41fd-a7af-fec64121e954\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.441770 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8c25051-577c-41fd-a7af-fec64121e954-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c8c25051-577c-41fd-a7af-fec64121e954\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.441806 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8c25051-577c-41fd-a7af-fec64121e954-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c8c25051-577c-41fd-a7af-fec64121e954\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.441844 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8c25051-577c-41fd-a7af-fec64121e954-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c8c25051-577c-41fd-a7af-fec64121e954\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.441881 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c25051-577c-41fd-a7af-fec64121e954-config\") pod \"ovsdbserver-nb-0\" (UID: \"c8c25051-577c-41fd-a7af-fec64121e954\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.493693 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.495057 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.500958 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.501119 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-86kvx"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.501351 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.501498 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.524704 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.543630 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8c25051-577c-41fd-a7af-fec64121e954-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c8c25051-577c-41fd-a7af-fec64121e954\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.543714 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8c25051-577c-41fd-a7af-fec64121e954-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c8c25051-577c-41fd-a7af-fec64121e954\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.543739 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8c25051-577c-41fd-a7af-fec64121e954-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c8c25051-577c-41fd-a7af-fec64121e954\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.543783 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8c25051-577c-41fd-a7af-fec64121e954-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c8c25051-577c-41fd-a7af-fec64121e954\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.543808 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c25051-577c-41fd-a7af-fec64121e954-config\") pod \"ovsdbserver-nb-0\" (UID: \"c8c25051-577c-41fd-a7af-fec64121e954\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.543874 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-950ae39f-fa84-4c47-9ab2-dea5a4f61ed8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-950ae39f-fa84-4c47-9ab2-dea5a4f61ed8\") pod \"ovsdbserver-nb-0\" (UID: \"c8c25051-577c-41fd-a7af-fec64121e954\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.543893 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdmq7\" (UniqueName: \"kubernetes.io/projected/c8c25051-577c-41fd-a7af-fec64121e954-kube-api-access-zdmq7\") pod \"ovsdbserver-nb-0\" (UID: \"c8c25051-577c-41fd-a7af-fec64121e954\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.543912 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c8c25051-577c-41fd-a7af-fec64121e954-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c8c25051-577c-41fd-a7af-fec64121e954\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.544282 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c8c25051-577c-41fd-a7af-fec64121e954-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c8c25051-577c-41fd-a7af-fec64121e954\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.544641 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8c25051-577c-41fd-a7af-fec64121e954-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c8c25051-577c-41fd-a7af-fec64121e954\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.545700 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c25051-577c-41fd-a7af-fec64121e954-config\") pod \"ovsdbserver-nb-0\" (UID: \"c8c25051-577c-41fd-a7af-fec64121e954\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.551361 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8c25051-577c-41fd-a7af-fec64121e954-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c8c25051-577c-41fd-a7af-fec64121e954\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.551773 4811 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.551795 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-950ae39f-fa84-4c47-9ab2-dea5a4f61ed8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-950ae39f-fa84-4c47-9ab2-dea5a4f61ed8\") pod \"ovsdbserver-nb-0\" (UID: \"c8c25051-577c-41fd-a7af-fec64121e954\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a940333f89dd2facb0bbd5e018623df7d88291962adca3f6402f8a4cc0e7c0e9/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.556440 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8c25051-577c-41fd-a7af-fec64121e954-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c8c25051-577c-41fd-a7af-fec64121e954\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.562865 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8c25051-577c-41fd-a7af-fec64121e954-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c8c25051-577c-41fd-a7af-fec64121e954\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.566028 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdmq7\" (UniqueName: \"kubernetes.io/projected/c8c25051-577c-41fd-a7af-fec64121e954-kube-api-access-zdmq7\") pod \"ovsdbserver-nb-0\" (UID: \"c8c25051-577c-41fd-a7af-fec64121e954\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.589034 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-950ae39f-fa84-4c47-9ab2-dea5a4f61ed8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-950ae39f-fa84-4c47-9ab2-dea5a4f61ed8\") pod \"ovsdbserver-nb-0\" (UID: \"c8c25051-577c-41fd-a7af-fec64121e954\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.633069 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.645696 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk8d8\" (UniqueName: \"kubernetes.io/projected/7a6c69be-2c47-4bcd-906e-ab109340067b-kube-api-access-qk8d8\") pod \"ovsdbserver-sb-0\" (UID: \"7a6c69be-2c47-4bcd-906e-ab109340067b\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.645769 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7a6c69be-2c47-4bcd-906e-ab109340067b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7a6c69be-2c47-4bcd-906e-ab109340067b\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.645798 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a6c69be-2c47-4bcd-906e-ab109340067b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7a6c69be-2c47-4bcd-906e-ab109340067b\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.645830 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a6c69be-2c47-4bcd-906e-ab109340067b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7a6c69be-2c47-4bcd-906e-ab109340067b\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.645860 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a6c69be-2c47-4bcd-906e-ab109340067b-config\") pod \"ovsdbserver-sb-0\" (UID: \"7a6c69be-2c47-4bcd-906e-ab109340067b\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.645905 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a6c69be-2c47-4bcd-906e-ab109340067b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7a6c69be-2c47-4bcd-906e-ab109340067b\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.645976 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fb82517c-1c6e-49f9-aec6-b974cf290baf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb82517c-1c6e-49f9-aec6-b974cf290baf\") pod \"ovsdbserver-sb-0\" (UID: \"7a6c69be-2c47-4bcd-906e-ab109340067b\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.646012 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a6c69be-2c47-4bcd-906e-ab109340067b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7a6c69be-2c47-4bcd-906e-ab109340067b\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.747863 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a6c69be-2c47-4bcd-906e-ab109340067b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7a6c69be-2c47-4bcd-906e-ab109340067b\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.747962 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk8d8\" (UniqueName: \"kubernetes.io/projected/7a6c69be-2c47-4bcd-906e-ab109340067b-kube-api-access-qk8d8\") pod \"ovsdbserver-sb-0\" (UID: \"7a6c69be-2c47-4bcd-906e-ab109340067b\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.748008 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7a6c69be-2c47-4bcd-906e-ab109340067b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7a6c69be-2c47-4bcd-906e-ab109340067b\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.748038 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a6c69be-2c47-4bcd-906e-ab109340067b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7a6c69be-2c47-4bcd-906e-ab109340067b\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.748073 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a6c69be-2c47-4bcd-906e-ab109340067b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7a6c69be-2c47-4bcd-906e-ab109340067b\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.748115 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a6c69be-2c47-4bcd-906e-ab109340067b-config\") pod \"ovsdbserver-sb-0\" (UID: \"7a6c69be-2c47-4bcd-906e-ab109340067b\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.748154 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a6c69be-2c47-4bcd-906e-ab109340067b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7a6c69be-2c47-4bcd-906e-ab109340067b\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.748247 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fb82517c-1c6e-49f9-aec6-b974cf290baf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb82517c-1c6e-49f9-aec6-b974cf290baf\") pod \"ovsdbserver-sb-0\" (UID: \"7a6c69be-2c47-4bcd-906e-ab109340067b\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.749305 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7a6c69be-2c47-4bcd-906e-ab109340067b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7a6c69be-2c47-4bcd-906e-ab109340067b\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.749898 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a6c69be-2c47-4bcd-906e-ab109340067b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7a6c69be-2c47-4bcd-906e-ab109340067b\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.750463 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a6c69be-2c47-4bcd-906e-ab109340067b-config\") pod \"ovsdbserver-sb-0\" (UID: \"7a6c69be-2c47-4bcd-906e-ab109340067b\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.751515 4811 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.751543 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fb82517c-1c6e-49f9-aec6-b974cf290baf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb82517c-1c6e-49f9-aec6-b974cf290baf\") pod \"ovsdbserver-sb-0\" (UID: \"7a6c69be-2c47-4bcd-906e-ab109340067b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d734ee8dbbae1ae3c3e541aaadfe29931f89cd5487c9d427f387f5cd28ba7668/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.763984 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a6c69be-2c47-4bcd-906e-ab109340067b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7a6c69be-2c47-4bcd-906e-ab109340067b\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.765186 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a6c69be-2c47-4bcd-906e-ab109340067b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7a6c69be-2c47-4bcd-906e-ab109340067b\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.771003 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk8d8\" (UniqueName: \"kubernetes.io/projected/7a6c69be-2c47-4bcd-906e-ab109340067b-kube-api-access-qk8d8\") pod \"ovsdbserver-sb-0\" (UID: \"7a6c69be-2c47-4bcd-906e-ab109340067b\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.774955 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a6c69be-2c47-4bcd-906e-ab109340067b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7a6c69be-2c47-4bcd-906e-ab109340067b\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.812318 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fb82517c-1c6e-49f9-aec6-b974cf290baf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb82517c-1c6e-49f9-aec6-b974cf290baf\") pod \"ovsdbserver-sb-0\" (UID: \"7a6c69be-2c47-4bcd-906e-ab109340067b\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:04 crc kubenswrapper[4811]: I0216 21:12:04.829116 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:07 crc kubenswrapper[4811]: E0216 21:12:07.724955 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e8f122215a6c2e0fdd6370abe92235ed41ca46b90b12181d96f2033073b1cf9c is running failed: container process not found" containerID="e8f122215a6c2e0fdd6370abe92235ed41ca46b90b12181d96f2033073b1cf9c" cmd=["grpc_health_probe","-addr=:50051"]
Feb 16 21:12:07 crc kubenswrapper[4811]: E0216 21:12:07.726081 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e8f122215a6c2e0fdd6370abe92235ed41ca46b90b12181d96f2033073b1cf9c is running failed: container process not found" containerID="e8f122215a6c2e0fdd6370abe92235ed41ca46b90b12181d96f2033073b1cf9c" cmd=["grpc_health_probe","-addr=:50051"]
Feb 16 21:12:07 crc kubenswrapper[4811]: E0216 21:12:07.726510 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e8f122215a6c2e0fdd6370abe92235ed41ca46b90b12181d96f2033073b1cf9c is running failed: container process not found" containerID="e8f122215a6c2e0fdd6370abe92235ed41ca46b90b12181d96f2033073b1cf9c" cmd=["grpc_health_probe","-addr=:50051"]
Feb 16 21:12:07 crc kubenswrapper[4811]: E0216 21:12:07.726553 4811 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e8f122215a6c2e0fdd6370abe92235ed41ca46b90b12181d96f2033073b1cf9c is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-gpdwb" podUID="e6100253-d183-48b3-bcdf-f193f07d42a1" containerName="registry-server"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.010071 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj"]
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.011433 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.027149 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-dockercfg-9ggt2"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.027457 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-http"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.027584 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-grpc"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.028009 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca-bundle"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.028274 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-config"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.036330 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj"]
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.085241 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.145568 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74ea8ac5-2a83-484e-b8bc-ddf8c7045e00-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-68wjj\" (UID: \"74ea8ac5-2a83-484e-b8bc-ddf8c7045e00\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.145621 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74ea8ac5-2a83-484e-b8bc-ddf8c7045e00-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-68wjj\" (UID: \"74ea8ac5-2a83-484e-b8bc-ddf8c7045e00\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.145642 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/74ea8ac5-2a83-484e-b8bc-ddf8c7045e00-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-68wjj\" (UID: \"74ea8ac5-2a83-484e-b8bc-ddf8c7045e00\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.145673 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6tgg\" (UniqueName: \"kubernetes.io/projected/74ea8ac5-2a83-484e-b8bc-ddf8c7045e00-kube-api-access-r6tgg\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-68wjj\" (UID: \"74ea8ac5-2a83-484e-b8bc-ddf8c7045e00\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.145837 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/74ea8ac5-2a83-484e-b8bc-ddf8c7045e00-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-68wjj\" (UID: \"74ea8ac5-2a83-484e-b8bc-ddf8c7045e00\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.220638 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l"]
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.221585 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.227497 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-grpc"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.228589 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-loki-s3"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.228715 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-http"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.249014 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6tgg\" (UniqueName: \"kubernetes.io/projected/74ea8ac5-2a83-484e-b8bc-ddf8c7045e00-kube-api-access-r6tgg\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-68wjj\" (UID: \"74ea8ac5-2a83-484e-b8bc-ddf8c7045e00\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.251401 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l"]
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.253143 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/74ea8ac5-2a83-484e-b8bc-ddf8c7045e00-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-68wjj\" (UID: \"74ea8ac5-2a83-484e-b8bc-ddf8c7045e00\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.253230 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74ea8ac5-2a83-484e-b8bc-ddf8c7045e00-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-68wjj\" (UID: \"74ea8ac5-2a83-484e-b8bc-ddf8c7045e00\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.253268 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74ea8ac5-2a83-484e-b8bc-ddf8c7045e00-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-68wjj\" (UID: \"74ea8ac5-2a83-484e-b8bc-ddf8c7045e00\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.253288 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/74ea8ac5-2a83-484e-b8bc-ddf8c7045e00-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-68wjj\" (UID: \"74ea8ac5-2a83-484e-b8bc-ddf8c7045e00\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.255087 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74ea8ac5-2a83-484e-b8bc-ddf8c7045e00-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-68wjj\" (UID: \"74ea8ac5-2a83-484e-b8bc-ddf8c7045e00\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.259379 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74ea8ac5-2a83-484e-b8bc-ddf8c7045e00-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-68wjj\" (UID: \"74ea8ac5-2a83-484e-b8bc-ddf8c7045e00\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.265023 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/74ea8ac5-2a83-484e-b8bc-ddf8c7045e00-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-68wjj\" (UID: \"74ea8ac5-2a83-484e-b8bc-ddf8c7045e00\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.280115 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6tgg\" (UniqueName: \"kubernetes.io/projected/74ea8ac5-2a83-484e-b8bc-ddf8c7045e00-kube-api-access-r6tgg\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-68wjj\" (UID: \"74ea8ac5-2a83-484e-b8bc-ddf8c7045e00\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.280844 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/74ea8ac5-2a83-484e-b8bc-ddf8c7045e00-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-68wjj\" (UID: \"74ea8ac5-2a83-484e-b8bc-ddf8c7045e00\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.316685 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb"]
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.317849 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.321213 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-http"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.321434 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-grpc"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.335019 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb"]
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.348811 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.355705 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/cb32d76e-7b43-4a6f-9d01-922be5156eec-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-jld6l\" (UID: \"cb32d76e-7b43-4a6f-9d01-922be5156eec\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.355764 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb32d76e-7b43-4a6f-9d01-922be5156eec-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-jld6l\" (UID: \"cb32d76e-7b43-4a6f-9d01-922be5156eec\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.355803 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkgfz\" (UniqueName: \"kubernetes.io/projected/cb32d76e-7b43-4a6f-9d01-922be5156eec-kube-api-access-gkgfz\") pod \"cloudkitty-lokistack-querier-58c84b5844-jld6l\" (UID: \"cb32d76e-7b43-4a6f-9d01-922be5156eec\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.355845 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb32d76e-7b43-4a6f-9d01-922be5156eec-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-jld6l\" (UID: \"cb32d76e-7b43-4a6f-9d01-922be5156eec\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l"
Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.355921 4811 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/cb32d76e-7b43-4a6f-9d01-922be5156eec-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-jld6l\" (UID: \"cb32d76e-7b43-4a6f-9d01-922be5156eec\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.355948 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/cb32d76e-7b43-4a6f-9d01-922be5156eec-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-jld6l\" (UID: \"cb32d76e-7b43-4a6f-9d01-922be5156eec\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.428264 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd"] Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.429364 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.434182 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.434570 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-client-http" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.434691 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway-ca-bundle" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.434804 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-http" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.434945 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.435064 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.435220 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-dockercfg-ld98w" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.449554 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn"] Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.450671 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.458336 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/5557acf3-367b-4296-a944-d52fb4545738-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb\" (UID: \"5557acf3-367b-4296-a944-d52fb4545738\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.458375 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/cb32d76e-7b43-4a6f-9d01-922be5156eec-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-jld6l\" (UID: \"cb32d76e-7b43-4a6f-9d01-922be5156eec\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.458402 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/cb32d76e-7b43-4a6f-9d01-922be5156eec-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-jld6l\" (UID: \"cb32d76e-7b43-4a6f-9d01-922be5156eec\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.458423 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5557acf3-367b-4296-a944-d52fb4545738-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb\" (UID: \"5557acf3-367b-4296-a944-d52fb4545738\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb" Feb 16 21:12:09 crc 
kubenswrapper[4811]: I0216 21:12:09.458458 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb32d76e-7b43-4a6f-9d01-922be5156eec-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-jld6l\" (UID: \"cb32d76e-7b43-4a6f-9d01-922be5156eec\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.458491 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/5557acf3-367b-4296-a944-d52fb4545738-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb\" (UID: \"5557acf3-367b-4296-a944-d52fb4545738\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.458511 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkgfz\" (UniqueName: \"kubernetes.io/projected/cb32d76e-7b43-4a6f-9d01-922be5156eec-kube-api-access-gkgfz\") pod \"cloudkitty-lokistack-querier-58c84b5844-jld6l\" (UID: \"cb32d76e-7b43-4a6f-9d01-922be5156eec\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.458550 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb32d76e-7b43-4a6f-9d01-922be5156eec-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-jld6l\" (UID: \"cb32d76e-7b43-4a6f-9d01-922be5156eec\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.458570 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2cmj\" (UniqueName: 
\"kubernetes.io/projected/5557acf3-367b-4296-a944-d52fb4545738-kube-api-access-w2cmj\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb\" (UID: \"5557acf3-367b-4296-a944-d52fb4545738\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.458599 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5557acf3-367b-4296-a944-d52fb4545738-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb\" (UID: \"5557acf3-367b-4296-a944-d52fb4545738\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.458633 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/cb32d76e-7b43-4a6f-9d01-922be5156eec-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-jld6l\" (UID: \"cb32d76e-7b43-4a6f-9d01-922be5156eec\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.462650 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/cb32d76e-7b43-4a6f-9d01-922be5156eec-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-jld6l\" (UID: \"cb32d76e-7b43-4a6f-9d01-922be5156eec\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.464033 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb32d76e-7b43-4a6f-9d01-922be5156eec-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-jld6l\" (UID: \"cb32d76e-7b43-4a6f-9d01-922be5156eec\") 
" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.464937 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb32d76e-7b43-4a6f-9d01-922be5156eec-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-jld6l\" (UID: \"cb32d76e-7b43-4a6f-9d01-922be5156eec\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.467401 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/cb32d76e-7b43-4a6f-9d01-922be5156eec-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-jld6l\" (UID: \"cb32d76e-7b43-4a6f-9d01-922be5156eec\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.473290 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd"] Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.476664 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/cb32d76e-7b43-4a6f-9d01-922be5156eec-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-jld6l\" (UID: \"cb32d76e-7b43-4a6f-9d01-922be5156eec\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.492474 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn"] Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.495889 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkgfz\" (UniqueName: \"kubernetes.io/projected/cb32d76e-7b43-4a6f-9d01-922be5156eec-kube-api-access-gkgfz\") pod 
\"cloudkitty-lokistack-querier-58c84b5844-jld6l\" (UID: \"cb32d76e-7b43-4a6f-9d01-922be5156eec\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.562048 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/a4296913-66bb-481c-a5a8-b667e191ae73-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.562103 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/abc1ac44-b93b-4a99-af90-c0b9c9839e96-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.562135 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbcks\" (UniqueName: \"kubernetes.io/projected/abc1ac44-b93b-4a99-af90-c0b9c9839e96-kube-api-access-fbcks\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.562168 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/5557acf3-367b-4296-a944-d52fb4545738-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb\" (UID: \"5557acf3-367b-4296-a944-d52fb4545738\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb" Feb 16 21:12:09 crc 
kubenswrapper[4811]: I0216 21:12:09.562222 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/abc1ac44-b93b-4a99-af90-c0b9c9839e96-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.562256 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/a4296913-66bb-481c-a5a8-b667e191ae73-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.562311 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/a4296913-66bb-481c-a5a8-b667e191ae73-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.562343 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4296913-66bb-481c-a5a8-b667e191ae73-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.562370 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/a4296913-66bb-481c-a5a8-b667e191ae73-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.562406 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2cmj\" (UniqueName: \"kubernetes.io/projected/5557acf3-367b-4296-a944-d52fb4545738-kube-api-access-w2cmj\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb\" (UID: \"5557acf3-367b-4296-a944-d52fb4545738\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.562436 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/abc1ac44-b93b-4a99-af90-c0b9c9839e96-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.562468 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abc1ac44-b93b-4a99-af90-c0b9c9839e96-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.562503 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5557acf3-367b-4296-a944-d52fb4545738-config\") pod 
\"cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb\" (UID: \"5557acf3-367b-4296-a944-d52fb4545738\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.562524 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/abc1ac44-b93b-4a99-af90-c0b9c9839e96-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.562540 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4296913-66bb-481c-a5a8-b667e191ae73-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.562563 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/a4296913-66bb-481c-a5a8-b667e191ae73-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.562580 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abc1ac44-b93b-4a99-af90-c0b9c9839e96-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: 
I0216 21:12:09.562608 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/5557acf3-367b-4296-a944-d52fb4545738-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb\" (UID: \"5557acf3-367b-4296-a944-d52fb4545738\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.562629 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf9mq\" (UniqueName: \"kubernetes.io/projected/a4296913-66bb-481c-a5a8-b667e191ae73-kube-api-access-lf9mq\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.562653 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5557acf3-367b-4296-a944-d52fb4545738-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb\" (UID: \"5557acf3-367b-4296-a944-d52fb4545738\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.562673 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abc1ac44-b93b-4a99-af90-c0b9c9839e96-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.562693 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/abc1ac44-b93b-4a99-af90-c0b9c9839e96-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.562709 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4296913-66bb-481c-a5a8-b667e191ae73-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.563878 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5557acf3-367b-4296-a944-d52fb4545738-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb\" (UID: \"5557acf3-367b-4296-a944-d52fb4545738\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.564327 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5557acf3-367b-4296-a944-d52fb4545738-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb\" (UID: \"5557acf3-367b-4296-a944-d52fb4545738\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.566819 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/5557acf3-367b-4296-a944-d52fb4545738-cloudkitty-lokistack-query-frontend-grpc\") pod 
\"cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb\" (UID: \"5557acf3-367b-4296-a944-d52fb4545738\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.568157 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/5557acf3-367b-4296-a944-d52fb4545738-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb\" (UID: \"5557acf3-367b-4296-a944-d52fb4545738\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.588017 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2cmj\" (UniqueName: \"kubernetes.io/projected/5557acf3-367b-4296-a944-d52fb4545738-kube-api-access-w2cmj\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb\" (UID: \"5557acf3-367b-4296-a944-d52fb4545738\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.625325 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.650673 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.666899 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/a4296913-66bb-481c-a5a8-b667e191ae73-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.666973 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abc1ac44-b93b-4a99-af90-c0b9c9839e96-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.667024 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf9mq\" (UniqueName: \"kubernetes.io/projected/a4296913-66bb-481c-a5a8-b667e191ae73-kube-api-access-lf9mq\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.667066 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abc1ac44-b93b-4a99-af90-c0b9c9839e96-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.667092 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" 
(UniqueName: \"kubernetes.io/secret/abc1ac44-b93b-4a99-af90-c0b9c9839e96-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.667115 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4296913-66bb-481c-a5a8-b667e191ae73-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.667160 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/a4296913-66bb-481c-a5a8-b667e191ae73-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.667185 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/abc1ac44-b93b-4a99-af90-c0b9c9839e96-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.667231 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbcks\" (UniqueName: \"kubernetes.io/projected/abc1ac44-b93b-4a99-af90-c0b9c9839e96-kube-api-access-fbcks\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: 
I0216 21:12:09.667262 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/abc1ac44-b93b-4a99-af90-c0b9c9839e96-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.667287 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/a4296913-66bb-481c-a5a8-b667e191ae73-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.667320 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/a4296913-66bb-481c-a5a8-b667e191ae73-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.667355 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4296913-66bb-481c-a5a8-b667e191ae73-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.667382 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: 
\"kubernetes.io/configmap/a4296913-66bb-481c-a5a8-b667e191ae73-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.667422 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/abc1ac44-b93b-4a99-af90-c0b9c9839e96-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.667448 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abc1ac44-b93b-4a99-af90-c0b9c9839e96-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.667490 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/abc1ac44-b93b-4a99-af90-c0b9c9839e96-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.667512 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4296913-66bb-481c-a5a8-b667e191ae73-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc 
kubenswrapper[4811]: I0216 21:12:09.668297 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abc1ac44-b93b-4a99-af90-c0b9c9839e96-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.668492 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4296913-66bb-481c-a5a8-b667e191ae73-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.668687 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abc1ac44-b93b-4a99-af90-c0b9c9839e96-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.668917 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/a4296913-66bb-481c-a5a8-b667e191ae73-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.669273 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/abc1ac44-b93b-4a99-af90-c0b9c9839e96-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.669316 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/abc1ac44-b93b-4a99-af90-c0b9c9839e96-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.669492 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/a4296913-66bb-481c-a5a8-b667e191ae73-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.669618 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/abc1ac44-b93b-4a99-af90-c0b9c9839e96-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.669726 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4296913-66bb-481c-a5a8-b667e191ae73-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.669868 4811 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4296913-66bb-481c-a5a8-b667e191ae73-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.679256 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/a4296913-66bb-481c-a5a8-b667e191ae73-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.679287 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/a4296913-66bb-481c-a5a8-b667e191ae73-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.679299 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/abc1ac44-b93b-4a99-af90-c0b9c9839e96-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.679626 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/abc1ac44-b93b-4a99-af90-c0b9c9839e96-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc 
kubenswrapper[4811]: I0216 21:12:09.679644 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/abc1ac44-b93b-4a99-af90-c0b9c9839e96-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.693031 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/a4296913-66bb-481c-a5a8-b667e191ae73-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.695875 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf9mq\" (UniqueName: \"kubernetes.io/projected/a4296913-66bb-481c-a5a8-b667e191ae73-kube-api-access-lf9mq\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-kx7gd\" (UID: \"a4296913-66bb-481c-a5a8-b667e191ae73\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.702970 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbcks\" (UniqueName: \"kubernetes.io/projected/abc1ac44-b93b-4a99-af90-c0b9c9839e96-kube-api-access-fbcks\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-pcfgn\" (UID: \"abc1ac44-b93b-4a99-af90-c0b9c9839e96\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.753649 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:09 crc kubenswrapper[4811]: I0216 21:12:09.802673 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.198913 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.200592 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.203805 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.215925 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.223808 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.273782 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.274957 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.290726 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.290894 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.297565 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.368411 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.369784 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.374619 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.374866 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.388337 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/1d41079d-f556-47e9-bc54-75dc6461451e-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1d41079d-f556-47e9-bc54-75dc6461451e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.388379 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1d41079d-f556-47e9-bc54-75dc6461451e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.388404 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d41079d-f556-47e9-bc54-75dc6461451e-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1d41079d-f556-47e9-bc54-75dc6461451e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.388425 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjrz4\" (UniqueName: \"kubernetes.io/projected/1d41079d-f556-47e9-bc54-75dc6461451e-kube-api-access-mjrz4\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1d41079d-f556-47e9-bc54-75dc6461451e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.388458 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"5f050753-85f4-413e-92b6-0503db5e7391\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.388484 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/1d41079d-f556-47e9-bc54-75dc6461451e-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1d41079d-f556-47e9-bc54-75dc6461451e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.388502 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/5f050753-85f4-413e-92b6-0503db5e7391-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"5f050753-85f4-413e-92b6-0503db5e7391\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.388543 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/1d41079d-f556-47e9-bc54-75dc6461451e-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1d41079d-f556-47e9-bc54-75dc6461451e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.388562 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d41079d-f556-47e9-bc54-75dc6461451e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1d41079d-f556-47e9-bc54-75dc6461451e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.388596 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht66z\" (UniqueName: \"kubernetes.io/projected/5f050753-85f4-413e-92b6-0503db5e7391-kube-api-access-ht66z\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"5f050753-85f4-413e-92b6-0503db5e7391\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.388614 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/5f050753-85f4-413e-92b6-0503db5e7391-cloudkitty-lokistack-ingester-grpc\") pod 
\"cloudkitty-lokistack-ingester-0\" (UID: \"5f050753-85f4-413e-92b6-0503db5e7391\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.388637 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f050753-85f4-413e-92b6-0503db5e7391-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"5f050753-85f4-413e-92b6-0503db5e7391\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.388653 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f050753-85f4-413e-92b6-0503db5e7391-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"5f050753-85f4-413e-92b6-0503db5e7391\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.388671 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/5f050753-85f4-413e-92b6-0503db5e7391-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"5f050753-85f4-413e-92b6-0503db5e7391\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.388696 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"5f050753-85f4-413e-92b6-0503db5e7391\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.399260 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 16 21:12:10 crc kubenswrapper[4811]: 
I0216 21:12:10.490862 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/1d41079d-f556-47e9-bc54-75dc6461451e-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1d41079d-f556-47e9-bc54-75dc6461451e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.490915 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1d41079d-f556-47e9-bc54-75dc6461451e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.490940 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d41079d-f556-47e9-bc54-75dc6461451e-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1d41079d-f556-47e9-bc54-75dc6461451e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.490969 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjrz4\" (UniqueName: \"kubernetes.io/projected/1d41079d-f556-47e9-bc54-75dc6461451e-kube-api-access-mjrz4\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1d41079d-f556-47e9-bc54-75dc6461451e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.491021 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"5f050753-85f4-413e-92b6-0503db5e7391\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.491052 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/1d41079d-f556-47e9-bc54-75dc6461451e-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1d41079d-f556-47e9-bc54-75dc6461451e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.491079 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/5f050753-85f4-413e-92b6-0503db5e7391-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"5f050753-85f4-413e-92b6-0503db5e7391\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.491141 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/1d41079d-f556-47e9-bc54-75dc6461451e-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1d41079d-f556-47e9-bc54-75dc6461451e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.491162 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d41079d-f556-47e9-bc54-75dc6461451e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1d41079d-f556-47e9-bc54-75dc6461451e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.491348 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht66z\" (UniqueName: \"kubernetes.io/projected/5f050753-85f4-413e-92b6-0503db5e7391-kube-api-access-ht66z\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"5f050753-85f4-413e-92b6-0503db5e7391\") " 
pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.491655 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/5f050753-85f4-413e-92b6-0503db5e7391-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"5f050753-85f4-413e-92b6-0503db5e7391\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.491744 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f050753-85f4-413e-92b6-0503db5e7391-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"5f050753-85f4-413e-92b6-0503db5e7391\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.491775 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f050753-85f4-413e-92b6-0503db5e7391-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"5f050753-85f4-413e-92b6-0503db5e7391\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.491819 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/5f050753-85f4-413e-92b6-0503db5e7391-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"5f050753-85f4-413e-92b6-0503db5e7391\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.491876 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: 
\"5f050753-85f4-413e-92b6-0503db5e7391\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.492944 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f050753-85f4-413e-92b6-0503db5e7391-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"5f050753-85f4-413e-92b6-0503db5e7391\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.493255 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"5f050753-85f4-413e-92b6-0503db5e7391\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.493321 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1d41079d-f556-47e9-bc54-75dc6461451e\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.493869 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f050753-85f4-413e-92b6-0503db5e7391-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"5f050753-85f4-413e-92b6-0503db5e7391\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.494641 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d41079d-f556-47e9-bc54-75dc6461451e-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: 
\"1d41079d-f556-47e9-bc54-75dc6461451e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.510734 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/5f050753-85f4-413e-92b6-0503db5e7391-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"5f050753-85f4-413e-92b6-0503db5e7391\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.513330 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/1d41079d-f556-47e9-bc54-75dc6461451e-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1d41079d-f556-47e9-bc54-75dc6461451e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.514853 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"5f050753-85f4-413e-92b6-0503db5e7391\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.516514 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/1d41079d-f556-47e9-bc54-75dc6461451e-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1d41079d-f556-47e9-bc54-75dc6461451e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.517598 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1d41079d-f556-47e9-bc54-75dc6461451e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1d41079d-f556-47e9-bc54-75dc6461451e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.518274 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/5f050753-85f4-413e-92b6-0503db5e7391-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"5f050753-85f4-413e-92b6-0503db5e7391\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.518626 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/1d41079d-f556-47e9-bc54-75dc6461451e-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1d41079d-f556-47e9-bc54-75dc6461451e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.524482 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht66z\" (UniqueName: \"kubernetes.io/projected/5f050753-85f4-413e-92b6-0503db5e7391-kube-api-access-ht66z\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"5f050753-85f4-413e-92b6-0503db5e7391\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.525423 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/5f050753-85f4-413e-92b6-0503db5e7391-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"5f050753-85f4-413e-92b6-0503db5e7391\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.554434 4811 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"5f050753-85f4-413e-92b6-0503db5e7391\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.556788 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjrz4\" (UniqueName: \"kubernetes.io/projected/1d41079d-f556-47e9-bc54-75dc6461451e-kube-api-access-mjrz4\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1d41079d-f556-47e9-bc54-75dc6461451e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.571271 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"1d41079d-f556-47e9-bc54-75dc6461451e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.593146 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/4058dcf3-ddd9-4d4f-b909-9f7b0323c65a-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4058dcf3-ddd9-4d4f-b909-9f7b0323c65a\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.593276 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4058dcf3-ddd9-4d4f-b909-9f7b0323c65a\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.593322 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4058dcf3-ddd9-4d4f-b909-9f7b0323c65a-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4058dcf3-ddd9-4d4f-b909-9f7b0323c65a\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.593342 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/4058dcf3-ddd9-4d4f-b909-9f7b0323c65a-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4058dcf3-ddd9-4d4f-b909-9f7b0323c65a\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.593367 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/4058dcf3-ddd9-4d4f-b909-9f7b0323c65a-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4058dcf3-ddd9-4d4f-b909-9f7b0323c65a\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.593407 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc4bz\" (UniqueName: \"kubernetes.io/projected/4058dcf3-ddd9-4d4f-b909-9f7b0323c65a-kube-api-access-wc4bz\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4058dcf3-ddd9-4d4f-b909-9f7b0323c65a\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.593753 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4058dcf3-ddd9-4d4f-b909-9f7b0323c65a-config\") pod 
\"cloudkitty-lokistack-index-gateway-0\" (UID: \"4058dcf3-ddd9-4d4f-b909-9f7b0323c65a\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.599419 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"5f050753-85f4-413e-92b6-0503db5e7391\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.612559 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.695546 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4058dcf3-ddd9-4d4f-b909-9f7b0323c65a\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.695639 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4058dcf3-ddd9-4d4f-b909-9f7b0323c65a-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4058dcf3-ddd9-4d4f-b909-9f7b0323c65a\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.695681 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/4058dcf3-ddd9-4d4f-b909-9f7b0323c65a-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4058dcf3-ddd9-4d4f-b909-9f7b0323c65a\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 21:12:10 crc 
kubenswrapper[4811]: I0216 21:12:10.695724 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/4058dcf3-ddd9-4d4f-b909-9f7b0323c65a-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4058dcf3-ddd9-4d4f-b909-9f7b0323c65a\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.695744 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4058dcf3-ddd9-4d4f-b909-9f7b0323c65a\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.695781 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc4bz\" (UniqueName: \"kubernetes.io/projected/4058dcf3-ddd9-4d4f-b909-9f7b0323c65a-kube-api-access-wc4bz\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4058dcf3-ddd9-4d4f-b909-9f7b0323c65a\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.695871 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4058dcf3-ddd9-4d4f-b909-9f7b0323c65a-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4058dcf3-ddd9-4d4f-b909-9f7b0323c65a\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.695940 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/4058dcf3-ddd9-4d4f-b909-9f7b0323c65a-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: 
\"4058dcf3-ddd9-4d4f-b909-9f7b0323c65a\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.699814 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/4058dcf3-ddd9-4d4f-b909-9f7b0323c65a-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4058dcf3-ddd9-4d4f-b909-9f7b0323c65a\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.700095 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4058dcf3-ddd9-4d4f-b909-9f7b0323c65a-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4058dcf3-ddd9-4d4f-b909-9f7b0323c65a\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.700356 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/4058dcf3-ddd9-4d4f-b909-9f7b0323c65a-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4058dcf3-ddd9-4d4f-b909-9f7b0323c65a\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.700701 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4058dcf3-ddd9-4d4f-b909-9f7b0323c65a-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4058dcf3-ddd9-4d4f-b909-9f7b0323c65a\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.702624 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: 
\"kubernetes.io/secret/4058dcf3-ddd9-4d4f-b909-9f7b0323c65a-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4058dcf3-ddd9-4d4f-b909-9f7b0323c65a\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.721819 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4058dcf3-ddd9-4d4f-b909-9f7b0323c65a\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.722395 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc4bz\" (UniqueName: \"kubernetes.io/projected/4058dcf3-ddd9-4d4f-b909-9f7b0323c65a-kube-api-access-wc4bz\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4058dcf3-ddd9-4d4f-b909-9f7b0323c65a\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.819926 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:10 crc kubenswrapper[4811]: I0216 21:12:10.994332 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 21:12:11 crc kubenswrapper[4811]: E0216 21:12:11.713836 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 21:12:11 crc kubenswrapper[4811]: E0216 21:12:11.714065 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jl8ls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPr
esent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-gncf7_openstack(400bc5f6-6b87-4af8-9fa9-4429afb77168): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:12:11 crc kubenswrapper[4811]: E0216 21:12:11.715284 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-gncf7" podUID="400bc5f6-6b87-4af8-9fa9-4429afb77168" Feb 16 21:12:12 crc kubenswrapper[4811]: E0216 21:12:12.612484 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-gncf7" podUID="400bc5f6-6b87-4af8-9fa9-4429afb77168" Feb 16 21:12:13 crc kubenswrapper[4811]: E0216 21:12:13.009035 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 16 21:12:13 crc kubenswrapper[4811]: E0216 21:12:13.009229 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfqln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPro
be:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(cd541633-15e7-4a12-99a4-72637521386d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:12:13 crc kubenswrapper[4811]: E0216 21:12:13.010446 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="cd541633-15e7-4a12-99a4-72637521386d" Feb 16 21:12:13 crc kubenswrapper[4811]: E0216 21:12:13.085431 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 21:12:13 crc kubenswrapper[4811]: E0216 21:12:13.085695 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b7nvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-9vqg2_openstack(ff07a920-37d9-4e47-b0ed-a7319602bc75): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:12:13 crc kubenswrapper[4811]: E0216 21:12:13.088006 4811 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-9vqg2" podUID="ff07a920-37d9-4e47-b0ed-a7319602bc75" Feb 16 21:12:13 crc kubenswrapper[4811]: E0216 21:12:13.113509 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 21:12:13 crc kubenswrapper[4811]: E0216 21:12:13.114332 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dv6gv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-8rqf7_openstack(586092f5-7d82-4113-b7be-2753a057b7f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:12:13 crc kubenswrapper[4811]: E0216 21:12:13.116023 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-8rqf7" podUID="586092f5-7d82-4113-b7be-2753a057b7f6" Feb 16 21:12:13 crc kubenswrapper[4811]: I0216 21:12:13.262142 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gpdwb" Feb 16 21:12:13 crc kubenswrapper[4811]: I0216 21:12:13.343132 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6100253-d183-48b3-bcdf-f193f07d42a1-utilities\") pod \"e6100253-d183-48b3-bcdf-f193f07d42a1\" (UID: \"e6100253-d183-48b3-bcdf-f193f07d42a1\") " Feb 16 21:12:13 crc kubenswrapper[4811]: I0216 21:12:13.343471 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6100253-d183-48b3-bcdf-f193f07d42a1-catalog-content\") pod \"e6100253-d183-48b3-bcdf-f193f07d42a1\" (UID: \"e6100253-d183-48b3-bcdf-f193f07d42a1\") " Feb 16 21:12:13 crc kubenswrapper[4811]: I0216 21:12:13.343575 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4p6s\" (UniqueName: \"kubernetes.io/projected/e6100253-d183-48b3-bcdf-f193f07d42a1-kube-api-access-t4p6s\") pod \"e6100253-d183-48b3-bcdf-f193f07d42a1\" (UID: \"e6100253-d183-48b3-bcdf-f193f07d42a1\") " Feb 16 21:12:13 crc kubenswrapper[4811]: I0216 21:12:13.344036 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6100253-d183-48b3-bcdf-f193f07d42a1-utilities" (OuterVolumeSpecName: "utilities") pod "e6100253-d183-48b3-bcdf-f193f07d42a1" (UID: "e6100253-d183-48b3-bcdf-f193f07d42a1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:12:13 crc kubenswrapper[4811]: I0216 21:12:13.344868 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6100253-d183-48b3-bcdf-f193f07d42a1-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:13 crc kubenswrapper[4811]: I0216 21:12:13.347702 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6100253-d183-48b3-bcdf-f193f07d42a1-kube-api-access-t4p6s" (OuterVolumeSpecName: "kube-api-access-t4p6s") pod "e6100253-d183-48b3-bcdf-f193f07d42a1" (UID: "e6100253-d183-48b3-bcdf-f193f07d42a1"). InnerVolumeSpecName "kube-api-access-t4p6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:12:13 crc kubenswrapper[4811]: I0216 21:12:13.395289 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6100253-d183-48b3-bcdf-f193f07d42a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6100253-d183-48b3-bcdf-f193f07d42a1" (UID: "e6100253-d183-48b3-bcdf-f193f07d42a1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:12:13 crc kubenswrapper[4811]: I0216 21:12:13.447807 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6100253-d183-48b3-bcdf-f193f07d42a1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:13 crc kubenswrapper[4811]: I0216 21:12:13.447841 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4p6s\" (UniqueName: \"kubernetes.io/projected/e6100253-d183-48b3-bcdf-f193f07d42a1-kube-api-access-t4p6s\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:13 crc kubenswrapper[4811]: I0216 21:12:13.657404 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ffc95bb9-a405-4472-9879-f2dc826ffdb9","Type":"ContainerStarted","Data":"a0541629acfdc2b5f0bc379bc515010f8bf31491b0b50b24864cfc0f21dc5705"} Feb 16 21:12:13 crc kubenswrapper[4811]: I0216 21:12:13.726876 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gpdwb" Feb 16 21:12:13 crc kubenswrapper[4811]: I0216 21:12:13.726937 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpdwb" event={"ID":"e6100253-d183-48b3-bcdf-f193f07d42a1","Type":"ContainerDied","Data":"d337e5ecee55ab30330dd42df914f83ecf61b5300b50c924b7cc6ff6508fda0d"} Feb 16 21:12:13 crc kubenswrapper[4811]: I0216 21:12:13.727016 4811 scope.go:117] "RemoveContainer" containerID="e8f122215a6c2e0fdd6370abe92235ed41ca46b90b12181d96f2033073b1cf9c" Feb 16 21:12:13 crc kubenswrapper[4811]: I0216 21:12:13.799615 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 21:12:13 crc kubenswrapper[4811]: I0216 21:12:13.891172 4811 scope.go:117] "RemoveContainer" containerID="fa7618e858d0b78093ba5d2170f36c52af83dc4d1c4f5b96af8ecbb0a229e136" Feb 16 21:12:13 crc kubenswrapper[4811]: I0216 21:12:13.916274 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 21:12:14 crc kubenswrapper[4811]: I0216 21:12:14.005759 4811 scope.go:117] "RemoveContainer" containerID="2617a2dd076da2c786dc73abb7c04aa4c0680eecb952c0151b4f94f24e482620" Feb 16 21:12:14 crc kubenswrapper[4811]: I0216 21:12:14.020157 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gpdwb"] Feb 16 21:12:14 crc kubenswrapper[4811]: I0216 21:12:14.046602 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gpdwb"] Feb 16 21:12:14 crc kubenswrapper[4811]: I0216 21:12:14.275274 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-92bf7"] Feb 16 21:12:14 crc kubenswrapper[4811]: I0216 21:12:14.301456 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 16 21:12:14 crc kubenswrapper[4811]: I0216 21:12:14.377809 4811 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 16 21:12:14 crc kubenswrapper[4811]: W0216 21:12:14.436693 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32a12c18_c799_4092_8ba9_c89b2a5f713a.slice/crio-f8617be2dd0f982e45e55fa5985a0318ce0c62e6431d1b257812ea278493e8c8 WatchSource:0}: Error finding container f8617be2dd0f982e45e55fa5985a0318ce0c62e6431d1b257812ea278493e8c8: Status 404 returned error can't find the container with id f8617be2dd0f982e45e55fa5985a0318ce0c62e6431d1b257812ea278493e8c8 Feb 16 21:12:14 crc kubenswrapper[4811]: I0216 21:12:14.634644 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj"] Feb 16 21:12:14 crc kubenswrapper[4811]: I0216 21:12:14.651640 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd"] Feb 16 21:12:14 crc kubenswrapper[4811]: I0216 21:12:14.691073 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 21:12:14 crc kubenswrapper[4811]: I0216 21:12:14.722308 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6100253-d183-48b3-bcdf-f193f07d42a1" path="/var/lib/kubelet/pods/e6100253-d183-48b3-bcdf-f193f07d42a1/volumes" Feb 16 21:12:14 crc kubenswrapper[4811]: I0216 21:12:14.754227 4811 generic.go:334] "Generic (PLEG): container finished" podID="285b4d00-7d22-44c0-8a35-6f076f3135a7" containerID="a8f079a1c0c0e91de914f46dfc3f7d3f9e53691ebe8160be093eebd4911a2a1a" exitCode=0 Feb 16 21:12:14 crc kubenswrapper[4811]: I0216 21:12:14.754299 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-94vfs" event={"ID":"285b4d00-7d22-44c0-8a35-6f076f3135a7","Type":"ContainerDied","Data":"a8f079a1c0c0e91de914f46dfc3f7d3f9e53691ebe8160be093eebd4911a2a1a"} Feb 16 21:12:14 crc 
kubenswrapper[4811]: I0216 21:12:14.767015 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 21:12:14 crc kubenswrapper[4811]: I0216 21:12:14.769410 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92bf7" event={"ID":"6fb34ae7-4d56-44b0-9db6-c890b1d57fdf","Type":"ContainerStarted","Data":"cc2aee0736b41dec064893ed155f2f059b4ba2540c9ce795e9ac00fdda0b60b7"} Feb 16 21:12:14 crc kubenswrapper[4811]: I0216 21:12:14.773395 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"211f2606-1d07-4c2d-8533-d53495a99d5b","Type":"ContainerStarted","Data":"04cfe589f5251abe0a6dfbaf5ef97d13b9ce4933bcd4d0076ae099cba12ebb47"} Feb 16 21:12:14 crc kubenswrapper[4811]: I0216 21:12:14.775247 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"10a8f77b-e218-4975-9411-8c380eda2c5a","Type":"ContainerStarted","Data":"e280c9a025bac194458b2b5d61ccff4281507bec2289b26ece06124c067e568d"} Feb 16 21:12:14 crc kubenswrapper[4811]: I0216 21:12:14.776462 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"32a12c18-c799-4092-8ba9-c89b2a5f713a","Type":"ContainerStarted","Data":"f8617be2dd0f982e45e55fa5985a0318ce0c62e6431d1b257812ea278493e8c8"} Feb 16 21:12:14 crc kubenswrapper[4811]: I0216 21:12:14.788030 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"117fc5a2-d29b-4844-9dc6-4359d1c4c24d","Type":"ContainerStarted","Data":"9ae05292b6e8145f98d33a086c7fbfdda870cae7f6988ccea7341274feda067f"} Feb 16 21:12:14 crc kubenswrapper[4811]: I0216 21:12:14.879578 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-fktqj"] Feb 16 21:12:14 crc kubenswrapper[4811]: I0216 21:12:14.961281 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 16 21:12:14 crc kubenswrapper[4811]: I0216 21:12:14.985581 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l"] Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.003597 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb"] Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.014089 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn"] Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.022948 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.031606 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.039013 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qhsfb"] Feb 16 21:12:15 crc kubenswrapper[4811]: W0216 21:12:15.148813 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5557acf3_367b_4296_a944_d52fb4545738.slice/crio-6d355b78c3265084f564064fa803614c968a166bb726f68e5fa2a278a8145548 WatchSource:0}: Error finding container 6d355b78c3265084f564064fa803614c968a166bb726f68e5fa2a278a8145548: Status 404 returned error can't find the container with id 6d355b78c3265084f564064fa803614c968a166bb726f68e5fa2a278a8145548 Feb 16 21:12:15 crc kubenswrapper[4811]: W0216 21:12:15.151036 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74ea8ac5_2a83_484e_b8bc_ddf8c7045e00.slice/crio-5b180929121e319714e0c6c5a5831e3027150d671aa6d1ce0b893a3092dd9081 WatchSource:0}: Error finding container 
5b180929121e319714e0c6c5a5831e3027150d671aa6d1ce0b893a3092dd9081: Status 404 returned error can't find the container with id 5b180929121e319714e0c6c5a5831e3027150d671aa6d1ce0b893a3092dd9081 Feb 16 21:12:15 crc kubenswrapper[4811]: E0216 21:12:15.178211 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-querier,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=querier -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-querier-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,
ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-querier-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gkgfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-querier-58c84b5844-jld6l_openstack(cb32d76e-7b43-4a6f-9d01-922be5156eec): ErrImagePull: pull QPS exceeded" 
logger="UnhandledError" Feb 16 21:12:15 crc kubenswrapper[4811]: E0216 21:12:15.179573 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-querier\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l" podUID="cb32d76e-7b43-4a6f-9d01-922be5156eec" Feb 16 21:12:15 crc kubenswrapper[4811]: E0216 21:12:15.206932 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-distributor,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=distributor -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-distributor-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-distributor-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r6tgg,ReadOnly:true,M
ountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-distributor-585d9bcbc-68wjj_openstack(74ea8ac5-2a83-484e-b8bc-ddf8c7045e00): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 21:12:15 crc kubenswrapper[4811]: E0216 21:12:15.208188 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-distributor\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj" podUID="74ea8ac5-2a83-484e-b8bc-ddf8c7045e00" Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.217305 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8rqf7" Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.221057 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9vqg2" Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.295864 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.341799 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7nvd\" (UniqueName: \"kubernetes.io/projected/ff07a920-37d9-4e47-b0ed-a7319602bc75-kube-api-access-b7nvd\") pod \"ff07a920-37d9-4e47-b0ed-a7319602bc75\" (UID: \"ff07a920-37d9-4e47-b0ed-a7319602bc75\") " Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.341877 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv6gv\" (UniqueName: \"kubernetes.io/projected/586092f5-7d82-4113-b7be-2753a057b7f6-kube-api-access-dv6gv\") pod \"586092f5-7d82-4113-b7be-2753a057b7f6\" (UID: \"586092f5-7d82-4113-b7be-2753a057b7f6\") " Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.341914 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/586092f5-7d82-4113-b7be-2753a057b7f6-config\") pod \"586092f5-7d82-4113-b7be-2753a057b7f6\" (UID: \"586092f5-7d82-4113-b7be-2753a057b7f6\") " Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.341968 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff07a920-37d9-4e47-b0ed-a7319602bc75-dns-svc\") pod \"ff07a920-37d9-4e47-b0ed-a7319602bc75\" (UID: \"ff07a920-37d9-4e47-b0ed-a7319602bc75\") " Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.342047 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ff07a920-37d9-4e47-b0ed-a7319602bc75-config\") pod \"ff07a920-37d9-4e47-b0ed-a7319602bc75\" (UID: \"ff07a920-37d9-4e47-b0ed-a7319602bc75\") " Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.342901 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff07a920-37d9-4e47-b0ed-a7319602bc75-config" (OuterVolumeSpecName: "config") pod "ff07a920-37d9-4e47-b0ed-a7319602bc75" (UID: "ff07a920-37d9-4e47-b0ed-a7319602bc75"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.342952 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/586092f5-7d82-4113-b7be-2753a057b7f6-config" (OuterVolumeSpecName: "config") pod "586092f5-7d82-4113-b7be-2753a057b7f6" (UID: "586092f5-7d82-4113-b7be-2753a057b7f6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.343314 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff07a920-37d9-4e47-b0ed-a7319602bc75-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ff07a920-37d9-4e47-b0ed-a7319602bc75" (UID: "ff07a920-37d9-4e47-b0ed-a7319602bc75"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.346274 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff07a920-37d9-4e47-b0ed-a7319602bc75-kube-api-access-b7nvd" (OuterVolumeSpecName: "kube-api-access-b7nvd") pod "ff07a920-37d9-4e47-b0ed-a7319602bc75" (UID: "ff07a920-37d9-4e47-b0ed-a7319602bc75"). InnerVolumeSpecName "kube-api-access-b7nvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.349134 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/586092f5-7d82-4113-b7be-2753a057b7f6-kube-api-access-dv6gv" (OuterVolumeSpecName: "kube-api-access-dv6gv") pod "586092f5-7d82-4113-b7be-2753a057b7f6" (UID: "586092f5-7d82-4113-b7be-2753a057b7f6"). InnerVolumeSpecName "kube-api-access-dv6gv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.462433 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/586092f5-7d82-4113-b7be-2753a057b7f6-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.465422 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff07a920-37d9-4e47-b0ed-a7319602bc75-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.466344 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff07a920-37d9-4e47-b0ed-a7319602bc75-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.466394 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7nvd\" (UniqueName: \"kubernetes.io/projected/ff07a920-37d9-4e47-b0ed-a7319602bc75-kube-api-access-b7nvd\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.466413 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv6gv\" (UniqueName: \"kubernetes.io/projected/586092f5-7d82-4113-b7be-2753a057b7f6-kube-api-access-dv6gv\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.797099 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-78dd6ddcc-9vqg2" event={"ID":"ff07a920-37d9-4e47-b0ed-a7319602bc75","Type":"ContainerDied","Data":"a1a453cb6642304602c44e667d5cb921199afc9f25ff2990652b1fd3de7c273c"} Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.797117 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9vqg2" Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.798545 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8rqf7" event={"ID":"586092f5-7d82-4113-b7be-2753a057b7f6","Type":"ContainerDied","Data":"8ac089884a1d6f985d3652058de22a736d4cf0b2be180779954cdd4377390572"} Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.798574 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8rqf7" Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.800497 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"1d41079d-f556-47e9-bc54-75dc6461451e","Type":"ContainerStarted","Data":"1b2b11869719e68ce66b647e4b2f597bc47f1a8961ad5e5526b62630c3862bb1"} Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.802112 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l" event={"ID":"cb32d76e-7b43-4a6f-9d01-922be5156eec","Type":"ContainerStarted","Data":"5ce47b3129926fdb2d9e5313fdd873c901b8e37685cb313987c12e3b785e3cb7"} Feb 16 21:12:15 crc kubenswrapper[4811]: E0216 21:12:15.808553 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-querier\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l" 
podUID="cb32d76e-7b43-4a6f-9d01-922be5156eec" Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.812541 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"40263486-d6cd-4aa0-9570-affea970096f","Type":"ContainerStarted","Data":"f36361c6eae49f5925bc5dae3a142e7b838b079bb69edbe0c88c890b5bf9d97f"} Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.815747 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c8c25051-577c-41fd-a7af-fec64121e954","Type":"ContainerStarted","Data":"3f390602c27fe68635ac59b4a15a831a19bae935c2f210082b07a3939b9ccde3"} Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.818293 4811 generic.go:334] "Generic (PLEG): container finished" podID="6fb34ae7-4d56-44b0-9db6-c890b1d57fdf" containerID="77014d6f553a4ee5e3c15f228ddf80861d0616bb14bfbb4396526e6ebcca7485" exitCode=0 Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.818380 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92bf7" event={"ID":"6fb34ae7-4d56-44b0-9db6-c890b1d57fdf","Type":"ContainerDied","Data":"77014d6f553a4ee5e3c15f228ddf80861d0616bb14bfbb4396526e6ebcca7485"} Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.820162 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fktqj" event={"ID":"08f73916-0e3c-4ef7-97e7-a13b9923b620","Type":"ContainerStarted","Data":"5d81513d8c2e9a198cac6daf9ece9dae3c981ab632200030f8d9947d4110647b"} Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.821307 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"4058dcf3-ddd9-4d4f-b909-9f7b0323c65a","Type":"ContainerStarted","Data":"4eee8e8749e9f606dd05f2e7160551ee9b770c7539f8438296a58d62a06a4502"} Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.822858 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj" event={"ID":"74ea8ac5-2a83-484e-b8bc-ddf8c7045e00","Type":"ContainerStarted","Data":"5b180929121e319714e0c6c5a5831e3027150d671aa6d1ce0b893a3092dd9081"} Feb 16 21:12:15 crc kubenswrapper[4811]: E0216 21:12:15.824576 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-distributor\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj" podUID="74ea8ac5-2a83-484e-b8bc-ddf8c7045e00" Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.825444 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb" event={"ID":"5557acf3-367b-4296-a944-d52fb4545738","Type":"ContainerStarted","Data":"6d355b78c3265084f564064fa803614c968a166bb726f68e5fa2a278a8145548"} Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.826971 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" event={"ID":"abc1ac44-b93b-4a99-af90-c0b9c9839e96","Type":"ContainerStarted","Data":"1b3c9c8b4a34151ed40f21ed9703081443832040b7ce7ec22374fa420db7bff6"} Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.840846 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qhsfb" event={"ID":"b8edc00a-d032-460b-9e97-d784b4fdfe5c","Type":"ContainerStarted","Data":"4c881524168bc0fc56855bb12fd07701e2097d5e1d4a38c062594bb579d5cdff"} Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.842684 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cd541633-15e7-4a12-99a4-72637521386d","Type":"ContainerStarted","Data":"7caad1429ce704c8fb74cbbcdef94962dcaecef310a578dd1035d4bc6a9d0f1c"} Feb 16 21:12:15 crc 
kubenswrapper[4811]: I0216 21:12:15.847932 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" event={"ID":"a4296913-66bb-481c-a5a8-b667e191ae73","Type":"ContainerStarted","Data":"9eb6aba3d18619af3e42a0ab01945291143e83a9f05fd554ffe097d4989c7035"} Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.849680 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"5f050753-85f4-413e-92b6-0503db5e7391","Type":"ContainerStarted","Data":"d882da75d48b64083d65d895bef048a1de59d35133d8d9a6b62c98ba96b1609f"} Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.855906 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7a6c69be-2c47-4bcd-906e-ab109340067b","Type":"ContainerStarted","Data":"d5b2d9be04c94742c63e6d17200e91fbeadfab13868dc68536d264e9a22ffe0b"} Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.867712 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4247055a-8ca2-4a03-9a3a-d582d674b38a","Type":"ContainerStarted","Data":"6d8fb14050b5799345e1524def7a0c2c30e0adf6e124a95cff86937c1ed6cf40"} Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.915041 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9vqg2"] Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.928339 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9vqg2"] Feb 16 21:12:15 crc kubenswrapper[4811]: I0216 21:12:15.997715 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8rqf7"] Feb 16 21:12:16 crc kubenswrapper[4811]: I0216 21:12:16.006676 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8rqf7"] Feb 16 21:12:16 crc kubenswrapper[4811]: I0216 21:12:16.712613 4811 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="586092f5-7d82-4113-b7be-2753a057b7f6" path="/var/lib/kubelet/pods/586092f5-7d82-4113-b7be-2753a057b7f6/volumes" Feb 16 21:12:16 crc kubenswrapper[4811]: I0216 21:12:16.713211 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff07a920-37d9-4e47-b0ed-a7319602bc75" path="/var/lib/kubelet/pods/ff07a920-37d9-4e47-b0ed-a7319602bc75/volumes" Feb 16 21:12:16 crc kubenswrapper[4811]: E0216 21:12:16.877941 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-querier\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l" podUID="cb32d76e-7b43-4a6f-9d01-922be5156eec" Feb 16 21:12:16 crc kubenswrapper[4811]: E0216 21:12:16.879423 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-distributor\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj" podUID="74ea8ac5-2a83-484e-b8bc-ddf8c7045e00" Feb 16 21:12:18 crc kubenswrapper[4811]: I0216 21:12:18.366088 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:12:18 crc kubenswrapper[4811]: I0216 21:12:18.366508 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:12:18 crc kubenswrapper[4811]: I0216 21:12:18.366569 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" Feb 16 21:12:18 crc kubenswrapper[4811]: I0216 21:12:18.367385 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aec5c764f743f1a4d04f239fd31aa099d13a84893ba733482b70a62ad8b5e0d2"} pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:12:18 crc kubenswrapper[4811]: I0216 21:12:18.367445 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" containerID="cri-o://aec5c764f743f1a4d04f239fd31aa099d13a84893ba733482b70a62ad8b5e0d2" gracePeriod=600 Feb 16 21:12:18 crc kubenswrapper[4811]: I0216 21:12:18.893990 4811 generic.go:334] "Generic (PLEG): container finished" podID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerID="aec5c764f743f1a4d04f239fd31aa099d13a84893ba733482b70a62ad8b5e0d2" exitCode=0 Feb 16 21:12:18 crc kubenswrapper[4811]: I0216 21:12:18.894034 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerDied","Data":"aec5c764f743f1a4d04f239fd31aa099d13a84893ba733482b70a62ad8b5e0d2"} Feb 16 21:12:18 crc kubenswrapper[4811]: I0216 21:12:18.894098 4811 scope.go:117] "RemoveContainer" containerID="15b3c1409544ddca121710199668aff9f31624230e68744253cb5ac3f7bbbf00" Feb 16 21:12:28 crc kubenswrapper[4811]: E0216 21:12:28.011784 4811 log.go:32] "PullImage from 
image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 16 21:12:28 crc kubenswrapper[4811]: E0216 21:12:28.012208 4811 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 16 21:12:28 crc kubenswrapper[4811]: E0216 21:12:28.012336 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tmxrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(ffc95bb9-a405-4472-9879-f2dc826ffdb9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 21:12:28 crc kubenswrapper[4811]: E0216 21:12:28.013823 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="ffc95bb9-a405-4472-9879-f2dc826ffdb9" Feb 16 21:12:28 crc kubenswrapper[4811]: E0216 21:12:28.989311 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="ffc95bb9-a405-4472-9879-f2dc826ffdb9" Feb 16 21:12:33 crc kubenswrapper[4811]: I0216 21:12:33.041736 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerStarted","Data":"c5a0cef66cb330788b58ea1a5723377ba1dc93aa2016d4d0b1ec1df645e788ff"} Feb 16 21:12:33 crc kubenswrapper[4811]: I0216 21:12:33.044254 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-94vfs" event={"ID":"285b4d00-7d22-44c0-8a35-6f076f3135a7","Type":"ContainerStarted","Data":"47cc52678d1e017e30cdb29dae3c2ea08815bb78597c07833d3ec61ef25b1bde"} Feb 16 21:12:33 crc kubenswrapper[4811]: I0216 21:12:33.044476 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-94vfs" Feb 16 21:12:33 crc kubenswrapper[4811]: I0216 21:12:33.046437 4811 generic.go:334] "Generic (PLEG): container finished" podID="400bc5f6-6b87-4af8-9fa9-4429afb77168" containerID="ae89f4c305523e61d101d43279454585023260213a304080d2aca16789a34766" exitCode=0 Feb 16 21:12:33 crc kubenswrapper[4811]: I0216 21:12:33.046514 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-gncf7" event={"ID":"400bc5f6-6b87-4af8-9fa9-4429afb77168","Type":"ContainerDied","Data":"ae89f4c305523e61d101d43279454585023260213a304080d2aca16789a34766"} Feb 16 21:12:33 crc kubenswrapper[4811]: I0216 21:12:33.048338 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"211f2606-1d07-4c2d-8533-d53495a99d5b","Type":"ContainerStarted","Data":"8ef01ca3f58f73a9751bef19566c17b4caa1e7349c13e85c96e1e0530435e21a"} Feb 16 21:12:33 crc kubenswrapper[4811]: I0216 21:12:33.048469 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 16 21:12:33 crc kubenswrapper[4811]: I0216 21:12:33.080867 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-94vfs" podStartSLOduration=20.741058047 podStartE2EDuration="42.080847696s" 
podCreationTimestamp="2026-02-16 21:11:51 +0000 UTC" firstStartedPulling="2026-02-16 21:11:51.966070121 +0000 UTC m=+929.895366059" lastFinishedPulling="2026-02-16 21:12:13.30585977 +0000 UTC m=+951.235155708" observedRunningTime="2026-02-16 21:12:33.073593833 +0000 UTC m=+971.002889791" watchObservedRunningTime="2026-02-16 21:12:33.080847696 +0000 UTC m=+971.010143644" Feb 16 21:12:33 crc kubenswrapper[4811]: I0216 21:12:33.092325 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=26.733598065 podStartE2EDuration="39.092304194s" podCreationTimestamp="2026-02-16 21:11:54 +0000 UTC" firstStartedPulling="2026-02-16 21:12:13.801601388 +0000 UTC m=+951.730897326" lastFinishedPulling="2026-02-16 21:12:26.160307517 +0000 UTC m=+964.089603455" observedRunningTime="2026-02-16 21:12:33.089494043 +0000 UTC m=+971.018789981" watchObservedRunningTime="2026-02-16 21:12:33.092304194 +0000 UTC m=+971.021600132" Feb 16 21:12:34 crc kubenswrapper[4811]: I0216 21:12:34.064162 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj" event={"ID":"74ea8ac5-2a83-484e-b8bc-ddf8c7045e00","Type":"ContainerStarted","Data":"c0795d04436d9184ede93d2d45074b3da087de1247d2c935c47a0e8c809e1007"} Feb 16 21:12:34 crc kubenswrapper[4811]: I0216 21:12:34.065010 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj" Feb 16 21:12:34 crc kubenswrapper[4811]: I0216 21:12:34.067385 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"10a8f77b-e218-4975-9411-8c380eda2c5a","Type":"ContainerStarted","Data":"378f07d5624b6f72c63802d6b7da7fd98d1b1df39879935f4909a295603f8284"} Feb 16 21:12:34 crc kubenswrapper[4811]: I0216 21:12:34.069644 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l" event={"ID":"cb32d76e-7b43-4a6f-9d01-922be5156eec","Type":"ContainerStarted","Data":"4bb2b5900223f55a61d68b74fbdfe95710a33e46638a2fc6e987590dc77a4c5d"} Feb 16 21:12:34 crc kubenswrapper[4811]: I0216 21:12:34.069857 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l" Feb 16 21:12:34 crc kubenswrapper[4811]: I0216 21:12:34.072024 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" event={"ID":"a4296913-66bb-481c-a5a8-b667e191ae73","Type":"ContainerStarted","Data":"b15d443281699d2bcfaf1ca57d2bdbaede3352fdc48f054c9f1ef668d322623e"} Feb 16 21:12:34 crc kubenswrapper[4811]: I0216 21:12:34.072576 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 21:12:34 crc kubenswrapper[4811]: I0216 21:12:34.074948 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c8c25051-577c-41fd-a7af-fec64121e954","Type":"ContainerStarted","Data":"10bd60070c5a2d452386a3f5fa9ca43d785ccd75239adc2bfbcf26b1947e7423"} Feb 16 21:12:34 crc kubenswrapper[4811]: I0216 21:12:34.077402 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb" event={"ID":"5557acf3-367b-4296-a944-d52fb4545738","Type":"ContainerStarted","Data":"8ac288962d55f9a4d8ae67a9ea9e006779bab9e3fd4bdc1b5ae692a72e3b5e3e"} Feb 16 21:12:34 crc kubenswrapper[4811]: I0216 21:12:34.078494 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb" Feb 16 21:12:34 crc kubenswrapper[4811]: I0216 21:12:34.082904 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" 
event={"ID":"5f050753-85f4-413e-92b6-0503db5e7391","Type":"ContainerStarted","Data":"b0a101c47006d75ae9ce422aeecd16dd7f2cfe5b440f6575d1a1647ee3fe1e49"} Feb 16 21:12:34 crc kubenswrapper[4811]: I0216 21:12:34.083401 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:12:34 crc kubenswrapper[4811]: I0216 21:12:34.088745 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fktqj" event={"ID":"08f73916-0e3c-4ef7-97e7-a13b9923b620","Type":"ContainerStarted","Data":"50535847d48457b47b81d0b48a3faa99ef84dffb0c6335ba360f2d24c31dd174"} Feb 16 21:12:34 crc kubenswrapper[4811]: I0216 21:12:34.101305 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"4058dcf3-ddd9-4d4f-b909-9f7b0323c65a","Type":"ContainerStarted","Data":"8425d19f89c645305815ec5e092f3d3c785fe7a7f98f2684e2dac82e901d06d8"} Feb 16 21:12:34 crc kubenswrapper[4811]: I0216 21:12:34.101374 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 21:12:34 crc kubenswrapper[4811]: I0216 21:12:34.104812 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj" podStartSLOduration=9.326896269 podStartE2EDuration="26.104792328s" podCreationTimestamp="2026-02-16 21:12:08 +0000 UTC" firstStartedPulling="2026-02-16 21:12:15.206765538 +0000 UTC m=+953.136061476" lastFinishedPulling="2026-02-16 21:12:31.984661597 +0000 UTC m=+969.913957535" observedRunningTime="2026-02-16 21:12:34.104025029 +0000 UTC m=+972.033321007" watchObservedRunningTime="2026-02-16 21:12:34.104792328 +0000 UTC m=+972.034088306" Feb 16 21:12:34 crc kubenswrapper[4811]: I0216 21:12:34.142459 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" Feb 16 
21:12:34 crc kubenswrapper[4811]: I0216 21:12:34.166569 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-ingester-0" podStartSLOduration=8.344610195 podStartE2EDuration="25.166552891s" podCreationTimestamp="2026-02-16 21:12:09 +0000 UTC" firstStartedPulling="2026-02-16 21:12:15.151392076 +0000 UTC m=+953.080688014" lastFinishedPulling="2026-02-16 21:12:31.973334772 +0000 UTC m=+969.902630710" observedRunningTime="2026-02-16 21:12:34.155168575 +0000 UTC m=+972.084464573" watchObservedRunningTime="2026-02-16 21:12:34.166552891 +0000 UTC m=+972.095848829" Feb 16 21:12:34 crc kubenswrapper[4811]: I0216 21:12:34.177565 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l" podStartSLOduration=-9223372011.677225 podStartE2EDuration="25.177550178s" podCreationTimestamp="2026-02-16 21:12:09 +0000 UTC" firstStartedPulling="2026-02-16 21:12:15.17581242 +0000 UTC m=+953.105108358" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:12:34.176308087 +0000 UTC m=+972.105604025" watchObservedRunningTime="2026-02-16 21:12:34.177550178 +0000 UTC m=+972.106846106" Feb 16 21:12:34 crc kubenswrapper[4811]: I0216 21:12:34.209944 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb" podStartSLOduration=8.389464963 podStartE2EDuration="25.209927132s" podCreationTimestamp="2026-02-16 21:12:09 +0000 UTC" firstStartedPulling="2026-02-16 21:12:15.153393696 +0000 UTC m=+953.082689634" lastFinishedPulling="2026-02-16 21:12:31.973855865 +0000 UTC m=+969.903151803" observedRunningTime="2026-02-16 21:12:34.204821474 +0000 UTC m=+972.134117452" watchObservedRunningTime="2026-02-16 21:12:34.209927132 +0000 UTC m=+972.139223070" Feb 16 21:12:34 crc kubenswrapper[4811]: I0216 21:12:34.229654 4811 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/cloudkitty-lokistack-index-gateway-0" podStartSLOduration=8.531210538 podStartE2EDuration="25.229639498s" podCreationTimestamp="2026-02-16 21:12:09 +0000 UTC" firstStartedPulling="2026-02-16 21:12:15.150503173 +0000 UTC m=+953.079799111" lastFinishedPulling="2026-02-16 21:12:31.848932133 +0000 UTC m=+969.778228071" observedRunningTime="2026-02-16 21:12:34.225621987 +0000 UTC m=+972.154917945" watchObservedRunningTime="2026-02-16 21:12:34.229639498 +0000 UTC m=+972.158935436" Feb 16 21:12:34 crc kubenswrapper[4811]: I0216 21:12:34.249014 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-kx7gd" podStartSLOduration=8.655871183 podStartE2EDuration="25.248995345s" podCreationTimestamp="2026-02-16 21:12:09 +0000 UTC" firstStartedPulling="2026-02-16 21:12:15.153630722 +0000 UTC m=+953.082926660" lastFinishedPulling="2026-02-16 21:12:31.746754884 +0000 UTC m=+969.676050822" observedRunningTime="2026-02-16 21:12:34.242741937 +0000 UTC m=+972.172037875" watchObservedRunningTime="2026-02-16 21:12:34.248995345 +0000 UTC m=+972.178291283" Feb 16 21:12:35 crc kubenswrapper[4811]: I0216 21:12:35.116379 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-gncf7" event={"ID":"400bc5f6-6b87-4af8-9fa9-4429afb77168","Type":"ContainerStarted","Data":"23897c5b8c8d9b903a34cd3bd94533aec7e9e5a94f608337237a595bb2d7847b"} Feb 16 21:12:35 crc kubenswrapper[4811]: I0216 21:12:35.117697 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-gncf7" Feb 16 21:12:35 crc kubenswrapper[4811]: I0216 21:12:35.133164 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" event={"ID":"abc1ac44-b93b-4a99-af90-c0b9c9839e96","Type":"ContainerStarted","Data":"8a8b15275cdcf320307ca380ef0bfe7fa523da9eca3a6a139cbeb4f774959c3a"} Feb 16 21:12:35 
crc kubenswrapper[4811]: I0216 21:12:35.134137 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:35 crc kubenswrapper[4811]: I0216 21:12:35.143121 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-gncf7" podStartSLOduration=-9223371991.711666 podStartE2EDuration="45.143109181s" podCreationTimestamp="2026-02-16 21:11:50 +0000 UTC" firstStartedPulling="2026-02-16 21:11:51.749791112 +0000 UTC m=+929.679087050" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:12:35.136972916 +0000 UTC m=+973.066268854" watchObservedRunningTime="2026-02-16 21:12:35.143109181 +0000 UTC m=+973.072405109" Feb 16 21:12:35 crc kubenswrapper[4811]: I0216 21:12:35.148658 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" Feb 16 21:12:35 crc kubenswrapper[4811]: I0216 21:12:35.153956 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92bf7" event={"ID":"6fb34ae7-4d56-44b0-9db6-c890b1d57fdf","Type":"ContainerStarted","Data":"c4b2a33a7979fa0c7dcc2571c66eb9d73f80ab77b417806f60e1c6a7e7a031dd"} Feb 16 21:12:35 crc kubenswrapper[4811]: I0216 21:12:35.160933 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-pcfgn" podStartSLOduration=9.34046293 podStartE2EDuration="26.160917179s" podCreationTimestamp="2026-02-16 21:12:09 +0000 UTC" firstStartedPulling="2026-02-16 21:12:15.150973675 +0000 UTC m=+953.080269623" lastFinishedPulling="2026-02-16 21:12:31.971427914 +0000 UTC m=+969.900723872" observedRunningTime="2026-02-16 21:12:35.158799645 +0000 UTC m=+973.088095603" watchObservedRunningTime="2026-02-16 21:12:35.160917179 +0000 UTC m=+973.090213117" Feb 16 21:12:35 crc kubenswrapper[4811]: I0216 
21:12:35.161945 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qhsfb" event={"ID":"b8edc00a-d032-460b-9e97-d784b4fdfe5c","Type":"ContainerStarted","Data":"53b889a49e343febc85a8d6a9aad595f2c303f37782f131c78f853b0d368eac1"} Feb 16 21:12:35 crc kubenswrapper[4811]: I0216 21:12:35.162862 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-qhsfb" Feb 16 21:12:35 crc kubenswrapper[4811]: I0216 21:12:35.172944 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7a6c69be-2c47-4bcd-906e-ab109340067b","Type":"ContainerStarted","Data":"4f4e18c527161d2d76397f705369f060854f2f687da67079a7cdb821fc94a999"} Feb 16 21:12:35 crc kubenswrapper[4811]: I0216 21:12:35.174475 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"1d41079d-f556-47e9-bc54-75dc6461451e","Type":"ContainerStarted","Data":"fc868ea4bae350cab3637c7949b9fc78504d4eb474788072c0e5fc8be8219ef3"} Feb 16 21:12:35 crc kubenswrapper[4811]: I0216 21:12:35.183669 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 21:12:35 crc kubenswrapper[4811]: I0216 21:12:35.189820 4811 generic.go:334] "Generic (PLEG): container finished" podID="08f73916-0e3c-4ef7-97e7-a13b9923b620" containerID="50535847d48457b47b81d0b48a3faa99ef84dffb0c6335ba360f2d24c31dd174" exitCode=0 Feb 16 21:12:35 crc kubenswrapper[4811]: I0216 21:12:35.189886 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fktqj" event={"ID":"08f73916-0e3c-4ef7-97e7-a13b9923b620","Type":"ContainerDied","Data":"50535847d48457b47b81d0b48a3faa99ef84dffb0c6335ba360f2d24c31dd174"} Feb 16 21:12:35 crc kubenswrapper[4811]: I0216 21:12:35.193899 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"32a12c18-c799-4092-8ba9-c89b2a5f713a","Type":"ContainerStarted","Data":"5d087f6699c20b8db867023b3aff03b96c37364a4bbc8b9ec6da79038aa0431d"} Feb 16 21:12:35 crc kubenswrapper[4811]: I0216 21:12:35.334469 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-compactor-0" podStartSLOduration=9.512057436 podStartE2EDuration="26.334447623s" podCreationTimestamp="2026-02-16 21:12:09 +0000 UTC" firstStartedPulling="2026-02-16 21:12:15.151077078 +0000 UTC m=+953.080373016" lastFinishedPulling="2026-02-16 21:12:31.973467265 +0000 UTC m=+969.902763203" observedRunningTime="2026-02-16 21:12:35.328131674 +0000 UTC m=+973.257427612" watchObservedRunningTime="2026-02-16 21:12:35.334447623 +0000 UTC m=+973.263743561" Feb 16 21:12:35 crc kubenswrapper[4811]: I0216 21:12:35.367040 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-qhsfb" podStartSLOduration=17.605133176 podStartE2EDuration="34.367019542s" podCreationTimestamp="2026-02-16 21:12:01 +0000 UTC" firstStartedPulling="2026-02-16 21:12:15.175742858 +0000 UTC m=+953.105038796" lastFinishedPulling="2026-02-16 21:12:31.937629204 +0000 UTC m=+969.866925162" observedRunningTime="2026-02-16 21:12:35.356378944 +0000 UTC m=+973.285674912" watchObservedRunningTime="2026-02-16 21:12:35.367019542 +0000 UTC m=+973.296315470" Feb 16 21:12:36 crc kubenswrapper[4811]: I0216 21:12:36.208943 4811 generic.go:334] "Generic (PLEG): container finished" podID="6fb34ae7-4d56-44b0-9db6-c890b1d57fdf" containerID="c4b2a33a7979fa0c7dcc2571c66eb9d73f80ab77b417806f60e1c6a7e7a031dd" exitCode=0 Feb 16 21:12:36 crc kubenswrapper[4811]: I0216 21:12:36.209021 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92bf7" event={"ID":"6fb34ae7-4d56-44b0-9db6-c890b1d57fdf","Type":"ContainerDied","Data":"c4b2a33a7979fa0c7dcc2571c66eb9d73f80ab77b417806f60e1c6a7e7a031dd"} Feb 16 21:12:36 crc 
kubenswrapper[4811]: I0216 21:12:36.213064 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fktqj" event={"ID":"08f73916-0e3c-4ef7-97e7-a13b9923b620","Type":"ContainerStarted","Data":"6c525058cb51595f5ee4383161ec1a6d1f2331b50474dec593c6ee338be8ec8c"} Feb 16 21:12:36 crc kubenswrapper[4811]: I0216 21:12:36.213126 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fktqj" event={"ID":"08f73916-0e3c-4ef7-97e7-a13b9923b620","Type":"ContainerStarted","Data":"b6a91e312c019c9e536b5ba94cec793932730f848bd190ef413cc0f565bcfef0"} Feb 16 21:12:36 crc kubenswrapper[4811]: I0216 21:12:36.213210 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-fktqj" Feb 16 21:12:36 crc kubenswrapper[4811]: I0216 21:12:36.219872 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"117fc5a2-d29b-4844-9dc6-4359d1c4c24d","Type":"ContainerStarted","Data":"0d1315480ff723e99a50ef586a26aa062913b902e284f9c578a839ff0527b228"} Feb 16 21:12:36 crc kubenswrapper[4811]: I0216 21:12:36.222358 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4247055a-8ca2-4a03-9a3a-d582d674b38a","Type":"ContainerStarted","Data":"1ecbba720783e3c6c08a9da6626dfdffbc9cf13424e8958bd036af88a0d5c304"} Feb 16 21:12:36 crc kubenswrapper[4811]: I0216 21:12:36.310474 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-fktqj" podStartSLOduration=21.749408413 podStartE2EDuration="35.310449869s" podCreationTimestamp="2026-02-16 21:12:01 +0000 UTC" firstStartedPulling="2026-02-16 21:12:15.151133219 +0000 UTC m=+953.080429157" lastFinishedPulling="2026-02-16 21:12:28.712174645 +0000 UTC m=+966.641470613" observedRunningTime="2026-02-16 21:12:36.303089754 +0000 UTC m=+974.232385702" watchObservedRunningTime="2026-02-16 21:12:36.310449869 
+0000 UTC m=+974.239745817" Feb 16 21:12:36 crc kubenswrapper[4811]: I0216 21:12:36.410630 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-fktqj" Feb 16 21:12:37 crc kubenswrapper[4811]: I0216 21:12:37.234701 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c8c25051-577c-41fd-a7af-fec64121e954","Type":"ContainerStarted","Data":"4d712ad39224bd96ad2b20e26f2d13f2bc089960b915a7cbe1a85f8e5c0acc25"} Feb 16 21:12:37 crc kubenswrapper[4811]: I0216 21:12:37.238252 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7a6c69be-2c47-4bcd-906e-ab109340067b","Type":"ContainerStarted","Data":"cb7de2ce8b349fc71d049bbaaa2e15e2bb55855e3ad0aa0c079ea1a214e751cc"} Feb 16 21:12:37 crc kubenswrapper[4811]: I0216 21:12:37.270647 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=12.802470621 podStartE2EDuration="34.270614948s" podCreationTimestamp="2026-02-16 21:12:03 +0000 UTC" firstStartedPulling="2026-02-16 21:12:15.467101426 +0000 UTC m=+953.396397364" lastFinishedPulling="2026-02-16 21:12:36.935245753 +0000 UTC m=+974.864541691" observedRunningTime="2026-02-16 21:12:37.259077837 +0000 UTC m=+975.188373775" watchObservedRunningTime="2026-02-16 21:12:37.270614948 +0000 UTC m=+975.199910916" Feb 16 21:12:37 crc kubenswrapper[4811]: I0216 21:12:37.633260 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 16 21:12:37 crc kubenswrapper[4811]: I0216 21:12:37.704405 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 16 21:12:37 crc kubenswrapper[4811]: I0216 21:12:37.746138 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=13.003514846 podStartE2EDuration="34.746111096s" 
podCreationTimestamp="2026-02-16 21:12:03 +0000 UTC" firstStartedPulling="2026-02-16 21:12:15.163393727 +0000 UTC m=+953.092689665" lastFinishedPulling="2026-02-16 21:12:36.905989977 +0000 UTC m=+974.835285915" observedRunningTime="2026-02-16 21:12:37.280467685 +0000 UTC m=+975.209763623" watchObservedRunningTime="2026-02-16 21:12:37.746111096 +0000 UTC m=+975.675407044" Feb 16 21:12:37 crc kubenswrapper[4811]: I0216 21:12:37.829736 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 16 21:12:37 crc kubenswrapper[4811]: I0216 21:12:37.899821 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.254351 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92bf7" event={"ID":"6fb34ae7-4d56-44b0-9db6-c890b1d57fdf","Type":"ContainerStarted","Data":"92c143ccf3f24e4ea7a9a38a8b336c79a3d9981791489fa8eeb657f7e3d5ca00"} Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.256963 4811 generic.go:334] "Generic (PLEG): container finished" podID="10a8f77b-e218-4975-9411-8c380eda2c5a" containerID="378f07d5624b6f72c63802d6b7da7fd98d1b1df39879935f4909a295603f8284" exitCode=0 Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.257005 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"10a8f77b-e218-4975-9411-8c380eda2c5a","Type":"ContainerDied","Data":"378f07d5624b6f72c63802d6b7da7fd98d1b1df39879935f4909a295603f8284"} Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.258456 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.258487 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.283076 
4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-92bf7" podStartSLOduration=21.935349819 podStartE2EDuration="41.283055889s" podCreationTimestamp="2026-02-16 21:11:57 +0000 UTC" firstStartedPulling="2026-02-16 21:12:17.742800488 +0000 UTC m=+955.672096426" lastFinishedPulling="2026-02-16 21:12:37.090506558 +0000 UTC m=+975.019802496" observedRunningTime="2026-02-16 21:12:38.274367571 +0000 UTC m=+976.203663529" watchObservedRunningTime="2026-02-16 21:12:38.283055889 +0000 UTC m=+976.212351837" Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.329753 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.654168 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gncf7"] Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.654840 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-gncf7" podUID="400bc5f6-6b87-4af8-9fa9-4429afb77168" containerName="dnsmasq-dns" containerID="cri-o://23897c5b8c8d9b903a34cd3bd94533aec7e9e5a94f608337237a595bb2d7847b" gracePeriod=10 Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.700426 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-zcd6z"] Feb 16 21:12:38 crc kubenswrapper[4811]: E0216 21:12:38.702454 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6100253-d183-48b3-bcdf-f193f07d42a1" containerName="registry-server" Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.702638 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6100253-d183-48b3-bcdf-f193f07d42a1" containerName="registry-server" Feb 16 21:12:38 crc kubenswrapper[4811]: E0216 21:12:38.702753 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6100253-d183-48b3-bcdf-f193f07d42a1" 
containerName="extract-content" Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.702845 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6100253-d183-48b3-bcdf-f193f07d42a1" containerName="extract-content" Feb 16 21:12:38 crc kubenswrapper[4811]: E0216 21:12:38.702929 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6100253-d183-48b3-bcdf-f193f07d42a1" containerName="extract-utilities" Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.703016 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6100253-d183-48b3-bcdf-f193f07d42a1" containerName="extract-utilities" Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.703336 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6100253-d183-48b3-bcdf-f193f07d42a1" containerName="registry-server" Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.704964 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-zcd6z" Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.707276 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.717055 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-zcd6z"] Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.752268 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fppwx\" (UniqueName: \"kubernetes.io/projected/c87cd6ce-730e-4107-8027-71b18ae4a0f7-kube-api-access-fppwx\") pod \"dnsmasq-dns-5bf47b49b7-zcd6z\" (UID: \"c87cd6ce-730e-4107-8027-71b18ae4a0f7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-zcd6z" Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.752368 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c87cd6ce-730e-4107-8027-71b18ae4a0f7-config\") pod \"dnsmasq-dns-5bf47b49b7-zcd6z\" (UID: \"c87cd6ce-730e-4107-8027-71b18ae4a0f7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-zcd6z" Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.752399 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c87cd6ce-730e-4107-8027-71b18ae4a0f7-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-zcd6z\" (UID: \"c87cd6ce-730e-4107-8027-71b18ae4a0f7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-zcd6z" Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.752427 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c87cd6ce-730e-4107-8027-71b18ae4a0f7-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-zcd6z\" (UID: \"c87cd6ce-730e-4107-8027-71b18ae4a0f7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-zcd6z" Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.773332 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-4xj7n"] Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.778826 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-4xj7n" Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.780835 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.794063 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4xj7n"] Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.857640 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d8a2432-c873-4ec8-9e02-aaf33ddd6d65-config\") pod \"ovn-controller-metrics-4xj7n\" (UID: \"6d8a2432-c873-4ec8-9e02-aaf33ddd6d65\") " pod="openstack/ovn-controller-metrics-4xj7n" Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.857708 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpqdd\" (UniqueName: \"kubernetes.io/projected/6d8a2432-c873-4ec8-9e02-aaf33ddd6d65-kube-api-access-lpqdd\") pod \"ovn-controller-metrics-4xj7n\" (UID: \"6d8a2432-c873-4ec8-9e02-aaf33ddd6d65\") " pod="openstack/ovn-controller-metrics-4xj7n" Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.857786 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6d8a2432-c873-4ec8-9e02-aaf33ddd6d65-ovs-rundir\") pod \"ovn-controller-metrics-4xj7n\" (UID: \"6d8a2432-c873-4ec8-9e02-aaf33ddd6d65\") " pod="openstack/ovn-controller-metrics-4xj7n" Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.857818 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d8a2432-c873-4ec8-9e02-aaf33ddd6d65-combined-ca-bundle\") pod \"ovn-controller-metrics-4xj7n\" (UID: \"6d8a2432-c873-4ec8-9e02-aaf33ddd6d65\") " 
pod="openstack/ovn-controller-metrics-4xj7n" Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.857870 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fppwx\" (UniqueName: \"kubernetes.io/projected/c87cd6ce-730e-4107-8027-71b18ae4a0f7-kube-api-access-fppwx\") pod \"dnsmasq-dns-5bf47b49b7-zcd6z\" (UID: \"c87cd6ce-730e-4107-8027-71b18ae4a0f7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-zcd6z" Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.857966 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c87cd6ce-730e-4107-8027-71b18ae4a0f7-config\") pod \"dnsmasq-dns-5bf47b49b7-zcd6z\" (UID: \"c87cd6ce-730e-4107-8027-71b18ae4a0f7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-zcd6z" Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.858007 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c87cd6ce-730e-4107-8027-71b18ae4a0f7-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-zcd6z\" (UID: \"c87cd6ce-730e-4107-8027-71b18ae4a0f7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-zcd6z" Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.858047 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d8a2432-c873-4ec8-9e02-aaf33ddd6d65-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4xj7n\" (UID: \"6d8a2432-c873-4ec8-9e02-aaf33ddd6d65\") " pod="openstack/ovn-controller-metrics-4xj7n" Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.858080 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c87cd6ce-730e-4107-8027-71b18ae4a0f7-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-zcd6z\" (UID: \"c87cd6ce-730e-4107-8027-71b18ae4a0f7\") " 
pod="openstack/dnsmasq-dns-5bf47b49b7-zcd6z"
Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.858134 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6d8a2432-c873-4ec8-9e02-aaf33ddd6d65-ovn-rundir\") pod \"ovn-controller-metrics-4xj7n\" (UID: \"6d8a2432-c873-4ec8-9e02-aaf33ddd6d65\") " pod="openstack/ovn-controller-metrics-4xj7n"
Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.859265 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c87cd6ce-730e-4107-8027-71b18ae4a0f7-config\") pod \"dnsmasq-dns-5bf47b49b7-zcd6z\" (UID: \"c87cd6ce-730e-4107-8027-71b18ae4a0f7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-zcd6z"
Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.859394 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c87cd6ce-730e-4107-8027-71b18ae4a0f7-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-zcd6z\" (UID: \"c87cd6ce-730e-4107-8027-71b18ae4a0f7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-zcd6z"
Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.859450 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c87cd6ce-730e-4107-8027-71b18ae4a0f7-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-zcd6z\" (UID: \"c87cd6ce-730e-4107-8027-71b18ae4a0f7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-zcd6z"
Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.880118 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fppwx\" (UniqueName: \"kubernetes.io/projected/c87cd6ce-730e-4107-8027-71b18ae4a0f7-kube-api-access-fppwx\") pod \"dnsmasq-dns-5bf47b49b7-zcd6z\" (UID: \"c87cd6ce-730e-4107-8027-71b18ae4a0f7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-zcd6z"
Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.960758 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6d8a2432-c873-4ec8-9e02-aaf33ddd6d65-ovs-rundir\") pod \"ovn-controller-metrics-4xj7n\" (UID: \"6d8a2432-c873-4ec8-9e02-aaf33ddd6d65\") " pod="openstack/ovn-controller-metrics-4xj7n"
Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.961135 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d8a2432-c873-4ec8-9e02-aaf33ddd6d65-combined-ca-bundle\") pod \"ovn-controller-metrics-4xj7n\" (UID: \"6d8a2432-c873-4ec8-9e02-aaf33ddd6d65\") " pod="openstack/ovn-controller-metrics-4xj7n"
Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.961307 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d8a2432-c873-4ec8-9e02-aaf33ddd6d65-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4xj7n\" (UID: \"6d8a2432-c873-4ec8-9e02-aaf33ddd6d65\") " pod="openstack/ovn-controller-metrics-4xj7n"
Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.961378 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6d8a2432-c873-4ec8-9e02-aaf33ddd6d65-ovn-rundir\") pod \"ovn-controller-metrics-4xj7n\" (UID: \"6d8a2432-c873-4ec8-9e02-aaf33ddd6d65\") " pod="openstack/ovn-controller-metrics-4xj7n"
Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.961665 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d8a2432-c873-4ec8-9e02-aaf33ddd6d65-config\") pod \"ovn-controller-metrics-4xj7n\" (UID: \"6d8a2432-c873-4ec8-9e02-aaf33ddd6d65\") " pod="openstack/ovn-controller-metrics-4xj7n"
Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.961704 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpqdd\" (UniqueName: \"kubernetes.io/projected/6d8a2432-c873-4ec8-9e02-aaf33ddd6d65-kube-api-access-lpqdd\") pod \"ovn-controller-metrics-4xj7n\" (UID: \"6d8a2432-c873-4ec8-9e02-aaf33ddd6d65\") " pod="openstack/ovn-controller-metrics-4xj7n"
Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.962368 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6d8a2432-c873-4ec8-9e02-aaf33ddd6d65-ovs-rundir\") pod \"ovn-controller-metrics-4xj7n\" (UID: \"6d8a2432-c873-4ec8-9e02-aaf33ddd6d65\") " pod="openstack/ovn-controller-metrics-4xj7n"
Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.962951 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6d8a2432-c873-4ec8-9e02-aaf33ddd6d65-ovn-rundir\") pod \"ovn-controller-metrics-4xj7n\" (UID: \"6d8a2432-c873-4ec8-9e02-aaf33ddd6d65\") " pod="openstack/ovn-controller-metrics-4xj7n"
Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.970540 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d8a2432-c873-4ec8-9e02-aaf33ddd6d65-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4xj7n\" (UID: \"6d8a2432-c873-4ec8-9e02-aaf33ddd6d65\") " pod="openstack/ovn-controller-metrics-4xj7n"
Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.971255 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d8a2432-c873-4ec8-9e02-aaf33ddd6d65-config\") pod \"ovn-controller-metrics-4xj7n\" (UID: \"6d8a2432-c873-4ec8-9e02-aaf33ddd6d65\") " pod="openstack/ovn-controller-metrics-4xj7n"
Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.973224 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d8a2432-c873-4ec8-9e02-aaf33ddd6d65-combined-ca-bundle\") pod \"ovn-controller-metrics-4xj7n\" (UID: \"6d8a2432-c873-4ec8-9e02-aaf33ddd6d65\") " pod="openstack/ovn-controller-metrics-4xj7n"
Feb 16 21:12:38 crc kubenswrapper[4811]: I0216 21:12:38.980337 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpqdd\" (UniqueName: \"kubernetes.io/projected/6d8a2432-c873-4ec8-9e02-aaf33ddd6d65-kube-api-access-lpqdd\") pod \"ovn-controller-metrics-4xj7n\" (UID: \"6d8a2432-c873-4ec8-9e02-aaf33ddd6d65\") " pod="openstack/ovn-controller-metrics-4xj7n"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.083003 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-94vfs"]
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.083238 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-94vfs" podUID="285b4d00-7d22-44c0-8a35-6f076f3135a7" containerName="dnsmasq-dns" containerID="cri-o://47cc52678d1e017e30cdb29dae3c2ea08815bb78597c07833d3ec61ef25b1bde" gracePeriod=10
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.089517 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-94vfs"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.108695 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-zcd6z"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.121155 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-vvgkf"]
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.123202 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vvgkf"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.134640 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.148001 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-4xj7n"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.161926 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vvgkf"]
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.266918 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5713e95b-f062-47be-8f12-aaa23215b31a-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-vvgkf\" (UID: \"5713e95b-f062-47be-8f12-aaa23215b31a\") " pod="openstack/dnsmasq-dns-8554648995-vvgkf"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.267207 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5713e95b-f062-47be-8f12-aaa23215b31a-config\") pod \"dnsmasq-dns-8554648995-vvgkf\" (UID: \"5713e95b-f062-47be-8f12-aaa23215b31a\") " pod="openstack/dnsmasq-dns-8554648995-vvgkf"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.267248 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6747\" (UniqueName: \"kubernetes.io/projected/5713e95b-f062-47be-8f12-aaa23215b31a-kube-api-access-f6747\") pod \"dnsmasq-dns-8554648995-vvgkf\" (UID: \"5713e95b-f062-47be-8f12-aaa23215b31a\") " pod="openstack/dnsmasq-dns-8554648995-vvgkf"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.267309 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5713e95b-f062-47be-8f12-aaa23215b31a-dns-svc\") pod \"dnsmasq-dns-8554648995-vvgkf\" (UID: \"5713e95b-f062-47be-8f12-aaa23215b31a\") " pod="openstack/dnsmasq-dns-8554648995-vvgkf"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.267347 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5713e95b-f062-47be-8f12-aaa23215b31a-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-vvgkf\" (UID: \"5713e95b-f062-47be-8f12-aaa23215b31a\") " pod="openstack/dnsmasq-dns-8554648995-vvgkf"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.297519 4811 generic.go:334] "Generic (PLEG): container finished" podID="400bc5f6-6b87-4af8-9fa9-4429afb77168" containerID="23897c5b8c8d9b903a34cd3bd94533aec7e9e5a94f608337237a595bb2d7847b" exitCode=0
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.297600 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-gncf7" event={"ID":"400bc5f6-6b87-4af8-9fa9-4429afb77168","Type":"ContainerDied","Data":"23897c5b8c8d9b903a34cd3bd94533aec7e9e5a94f608337237a595bb2d7847b"}
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.299069 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"10a8f77b-e218-4975-9411-8c380eda2c5a","Type":"ContainerStarted","Data":"697d2218b610e3f52017d223ad7bcfeb4da3de845d1f11b9e0740c0a02aa7d87"}
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.301254 4811 generic.go:334] "Generic (PLEG): container finished" podID="32a12c18-c799-4092-8ba9-c89b2a5f713a" containerID="5d087f6699c20b8db867023b3aff03b96c37364a4bbc8b9ec6da79038aa0431d" exitCode=0
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.301364 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"32a12c18-c799-4092-8ba9-c89b2a5f713a","Type":"ContainerDied","Data":"5d087f6699c20b8db867023b3aff03b96c37364a4bbc8b9ec6da79038aa0431d"}
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.314220 4811 generic.go:334] "Generic (PLEG): container finished" podID="285b4d00-7d22-44c0-8a35-6f076f3135a7" containerID="47cc52678d1e017e30cdb29dae3c2ea08815bb78597c07833d3ec61ef25b1bde" exitCode=0
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.314397 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-94vfs" event={"ID":"285b4d00-7d22-44c0-8a35-6f076f3135a7","Type":"ContainerDied","Data":"47cc52678d1e017e30cdb29dae3c2ea08815bb78597c07833d3ec61ef25b1bde"}
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.368546 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=31.572145955 podStartE2EDuration="46.368524499s" podCreationTimestamp="2026-02-16 21:11:53 +0000 UTC" firstStartedPulling="2026-02-16 21:12:13.912427026 +0000 UTC m=+951.841722964" lastFinishedPulling="2026-02-16 21:12:28.70880553 +0000 UTC m=+966.638101508" observedRunningTime="2026-02-16 21:12:39.35267522 +0000 UTC m=+977.281971168" watchObservedRunningTime="2026-02-16 21:12:39.368524499 +0000 UTC m=+977.297820437"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.375918 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5713e95b-f062-47be-8f12-aaa23215b31a-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-vvgkf\" (UID: \"5713e95b-f062-47be-8f12-aaa23215b31a\") " pod="openstack/dnsmasq-dns-8554648995-vvgkf"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.376005 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5713e95b-f062-47be-8f12-aaa23215b31a-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-vvgkf\" (UID: \"5713e95b-f062-47be-8f12-aaa23215b31a\") " pod="openstack/dnsmasq-dns-8554648995-vvgkf"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.376110 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5713e95b-f062-47be-8f12-aaa23215b31a-config\") pod \"dnsmasq-dns-8554648995-vvgkf\" (UID: \"5713e95b-f062-47be-8f12-aaa23215b31a\") " pod="openstack/dnsmasq-dns-8554648995-vvgkf"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.376154 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6747\" (UniqueName: \"kubernetes.io/projected/5713e95b-f062-47be-8f12-aaa23215b31a-kube-api-access-f6747\") pod \"dnsmasq-dns-8554648995-vvgkf\" (UID: \"5713e95b-f062-47be-8f12-aaa23215b31a\") " pod="openstack/dnsmasq-dns-8554648995-vvgkf"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.376276 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5713e95b-f062-47be-8f12-aaa23215b31a-dns-svc\") pod \"dnsmasq-dns-8554648995-vvgkf\" (UID: \"5713e95b-f062-47be-8f12-aaa23215b31a\") " pod="openstack/dnsmasq-dns-8554648995-vvgkf"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.378494 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5713e95b-f062-47be-8f12-aaa23215b31a-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-vvgkf\" (UID: \"5713e95b-f062-47be-8f12-aaa23215b31a\") " pod="openstack/dnsmasq-dns-8554648995-vvgkf"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.379923 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5713e95b-f062-47be-8f12-aaa23215b31a-config\") pod \"dnsmasq-dns-8554648995-vvgkf\" (UID: \"5713e95b-f062-47be-8f12-aaa23215b31a\") " pod="openstack/dnsmasq-dns-8554648995-vvgkf"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.388583 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5713e95b-f062-47be-8f12-aaa23215b31a-dns-svc\") pod \"dnsmasq-dns-8554648995-vvgkf\" (UID: \"5713e95b-f062-47be-8f12-aaa23215b31a\") " pod="openstack/dnsmasq-dns-8554648995-vvgkf"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.398134 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5713e95b-f062-47be-8f12-aaa23215b31a-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-vvgkf\" (UID: \"5713e95b-f062-47be-8f12-aaa23215b31a\") " pod="openstack/dnsmasq-dns-8554648995-vvgkf"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.414964 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6747\" (UniqueName: \"kubernetes.io/projected/5713e95b-f062-47be-8f12-aaa23215b31a-kube-api-access-f6747\") pod \"dnsmasq-dns-8554648995-vvgkf\" (UID: \"5713e95b-f062-47be-8f12-aaa23215b31a\") " pod="openstack/dnsmasq-dns-8554648995-vvgkf"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.426691 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.518796 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vvgkf"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.604029 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-gncf7"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.704783 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/400bc5f6-6b87-4af8-9fa9-4429afb77168-config\") pod \"400bc5f6-6b87-4af8-9fa9-4429afb77168\" (UID: \"400bc5f6-6b87-4af8-9fa9-4429afb77168\") "
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.705096 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl8ls\" (UniqueName: \"kubernetes.io/projected/400bc5f6-6b87-4af8-9fa9-4429afb77168-kube-api-access-jl8ls\") pod \"400bc5f6-6b87-4af8-9fa9-4429afb77168\" (UID: \"400bc5f6-6b87-4af8-9fa9-4429afb77168\") "
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.705164 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/400bc5f6-6b87-4af8-9fa9-4429afb77168-dns-svc\") pod \"400bc5f6-6b87-4af8-9fa9-4429afb77168\" (UID: \"400bc5f6-6b87-4af8-9fa9-4429afb77168\") "
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.716614 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/400bc5f6-6b87-4af8-9fa9-4429afb77168-kube-api-access-jl8ls" (OuterVolumeSpecName: "kube-api-access-jl8ls") pod "400bc5f6-6b87-4af8-9fa9-4429afb77168" (UID: "400bc5f6-6b87-4af8-9fa9-4429afb77168"). InnerVolumeSpecName "kube-api-access-jl8ls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.744781 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-94vfs"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.753746 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/400bc5f6-6b87-4af8-9fa9-4429afb77168-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "400bc5f6-6b87-4af8-9fa9-4429afb77168" (UID: "400bc5f6-6b87-4af8-9fa9-4429afb77168"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.757184 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/400bc5f6-6b87-4af8-9fa9-4429afb77168-config" (OuterVolumeSpecName: "config") pod "400bc5f6-6b87-4af8-9fa9-4429afb77168" (UID: "400bc5f6-6b87-4af8-9fa9-4429afb77168"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.809255 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/285b4d00-7d22-44c0-8a35-6f076f3135a7-dns-svc\") pod \"285b4d00-7d22-44c0-8a35-6f076f3135a7\" (UID: \"285b4d00-7d22-44c0-8a35-6f076f3135a7\") "
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.809356 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/285b4d00-7d22-44c0-8a35-6f076f3135a7-config\") pod \"285b4d00-7d22-44c0-8a35-6f076f3135a7\" (UID: \"285b4d00-7d22-44c0-8a35-6f076f3135a7\") "
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.809455 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff2sm\" (UniqueName: \"kubernetes.io/projected/285b4d00-7d22-44c0-8a35-6f076f3135a7-kube-api-access-ff2sm\") pod \"285b4d00-7d22-44c0-8a35-6f076f3135a7\" (UID: \"285b4d00-7d22-44c0-8a35-6f076f3135a7\") "
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.809923 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl8ls\" (UniqueName: \"kubernetes.io/projected/400bc5f6-6b87-4af8-9fa9-4429afb77168-kube-api-access-jl8ls\") on node \"crc\" DevicePath \"\""
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.809938 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/400bc5f6-6b87-4af8-9fa9-4429afb77168-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.809952 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/400bc5f6-6b87-4af8-9fa9-4429afb77168-config\") on node \"crc\" DevicePath \"\""
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.815327 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/285b4d00-7d22-44c0-8a35-6f076f3135a7-kube-api-access-ff2sm" (OuterVolumeSpecName: "kube-api-access-ff2sm") pod "285b4d00-7d22-44c0-8a35-6f076f3135a7" (UID: "285b4d00-7d22-44c0-8a35-6f076f3135a7"). InnerVolumeSpecName "kube-api-access-ff2sm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.816675 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 16 21:12:39 crc kubenswrapper[4811]: E0216 21:12:39.818388 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="400bc5f6-6b87-4af8-9fa9-4429afb77168" containerName="init"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.818401 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="400bc5f6-6b87-4af8-9fa9-4429afb77168" containerName="init"
Feb 16 21:12:39 crc kubenswrapper[4811]: E0216 21:12:39.818422 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="400bc5f6-6b87-4af8-9fa9-4429afb77168" containerName="dnsmasq-dns"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.818428 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="400bc5f6-6b87-4af8-9fa9-4429afb77168" containerName="dnsmasq-dns"
Feb 16 21:12:39 crc kubenswrapper[4811]: E0216 21:12:39.818441 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="285b4d00-7d22-44c0-8a35-6f076f3135a7" containerName="init"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.818447 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="285b4d00-7d22-44c0-8a35-6f076f3135a7" containerName="init"
Feb 16 21:12:39 crc kubenswrapper[4811]: E0216 21:12:39.818461 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="285b4d00-7d22-44c0-8a35-6f076f3135a7" containerName="dnsmasq-dns"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.818467 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="285b4d00-7d22-44c0-8a35-6f076f3135a7" containerName="dnsmasq-dns"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.818621 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="400bc5f6-6b87-4af8-9fa9-4429afb77168" containerName="dnsmasq-dns"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.818643 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="285b4d00-7d22-44c0-8a35-6f076f3135a7" containerName="dnsmasq-dns"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.819527 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.829586 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.830071 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-g8s8g"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.830430 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.831253 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.836798 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.868286 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/285b4d00-7d22-44c0-8a35-6f076f3135a7-config" (OuterVolumeSpecName: "config") pod "285b4d00-7d22-44c0-8a35-6f076f3135a7" (UID: "285b4d00-7d22-44c0-8a35-6f076f3135a7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.877569 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/285b4d00-7d22-44c0-8a35-6f076f3135a7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "285b4d00-7d22-44c0-8a35-6f076f3135a7" (UID: "285b4d00-7d22-44c0-8a35-6f076f3135a7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.911518 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fff999-08d2-426d-93a5-39ba9b2ad7ef-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f2fff999-08d2-426d-93a5-39ba9b2ad7ef\") " pod="openstack/ovn-northd-0"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.911594 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f2fff999-08d2-426d-93a5-39ba9b2ad7ef-scripts\") pod \"ovn-northd-0\" (UID: \"f2fff999-08d2-426d-93a5-39ba9b2ad7ef\") " pod="openstack/ovn-northd-0"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.911621 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fff999-08d2-426d-93a5-39ba9b2ad7ef-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f2fff999-08d2-426d-93a5-39ba9b2ad7ef\") " pod="openstack/ovn-northd-0"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.911640 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2fff999-08d2-426d-93a5-39ba9b2ad7ef-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f2fff999-08d2-426d-93a5-39ba9b2ad7ef\") " pod="openstack/ovn-northd-0"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.911660 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2fff999-08d2-426d-93a5-39ba9b2ad7ef-config\") pod \"ovn-northd-0\" (UID: \"f2fff999-08d2-426d-93a5-39ba9b2ad7ef\") " pod="openstack/ovn-northd-0"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.911919 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8pj4\" (UniqueName: \"kubernetes.io/projected/f2fff999-08d2-426d-93a5-39ba9b2ad7ef-kube-api-access-j8pj4\") pod \"ovn-northd-0\" (UID: \"f2fff999-08d2-426d-93a5-39ba9b2ad7ef\") " pod="openstack/ovn-northd-0"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.912006 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f2fff999-08d2-426d-93a5-39ba9b2ad7ef-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f2fff999-08d2-426d-93a5-39ba9b2ad7ef\") " pod="openstack/ovn-northd-0"
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.912127 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/285b4d00-7d22-44c0-8a35-6f076f3135a7-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.912143 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/285b4d00-7d22-44c0-8a35-6f076f3135a7-config\") on node \"crc\" DevicePath \"\""
Feb 16 21:12:39 crc kubenswrapper[4811]: I0216 21:12:39.912161 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff2sm\" (UniqueName: \"kubernetes.io/projected/285b4d00-7d22-44c0-8a35-6f076f3135a7-kube-api-access-ff2sm\") on node \"crc\" DevicePath \"\""
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.013430 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fff999-08d2-426d-93a5-39ba9b2ad7ef-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f2fff999-08d2-426d-93a5-39ba9b2ad7ef\") " pod="openstack/ovn-northd-0"
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.013532 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f2fff999-08d2-426d-93a5-39ba9b2ad7ef-scripts\") pod \"ovn-northd-0\" (UID: \"f2fff999-08d2-426d-93a5-39ba9b2ad7ef\") " pod="openstack/ovn-northd-0"
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.013573 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fff999-08d2-426d-93a5-39ba9b2ad7ef-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f2fff999-08d2-426d-93a5-39ba9b2ad7ef\") " pod="openstack/ovn-northd-0"
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.013598 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2fff999-08d2-426d-93a5-39ba9b2ad7ef-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f2fff999-08d2-426d-93a5-39ba9b2ad7ef\") " pod="openstack/ovn-northd-0"
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.013622 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2fff999-08d2-426d-93a5-39ba9b2ad7ef-config\") pod \"ovn-northd-0\" (UID: \"f2fff999-08d2-426d-93a5-39ba9b2ad7ef\") " pod="openstack/ovn-northd-0"
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.013688 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8pj4\" (UniqueName: \"kubernetes.io/projected/f2fff999-08d2-426d-93a5-39ba9b2ad7ef-kube-api-access-j8pj4\") pod \"ovn-northd-0\" (UID: \"f2fff999-08d2-426d-93a5-39ba9b2ad7ef\") " pod="openstack/ovn-northd-0"
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.013730 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f2fff999-08d2-426d-93a5-39ba9b2ad7ef-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f2fff999-08d2-426d-93a5-39ba9b2ad7ef\") " pod="openstack/ovn-northd-0"
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.015395 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f2fff999-08d2-426d-93a5-39ba9b2ad7ef-scripts\") pod \"ovn-northd-0\" (UID: \"f2fff999-08d2-426d-93a5-39ba9b2ad7ef\") " pod="openstack/ovn-northd-0"
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.015812 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f2fff999-08d2-426d-93a5-39ba9b2ad7ef-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f2fff999-08d2-426d-93a5-39ba9b2ad7ef\") " pod="openstack/ovn-northd-0"
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.016359 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2fff999-08d2-426d-93a5-39ba9b2ad7ef-config\") pod \"ovn-northd-0\" (UID: \"f2fff999-08d2-426d-93a5-39ba9b2ad7ef\") " pod="openstack/ovn-northd-0"
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.017029 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fff999-08d2-426d-93a5-39ba9b2ad7ef-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f2fff999-08d2-426d-93a5-39ba9b2ad7ef\") " pod="openstack/ovn-northd-0"
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.017098 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2fff999-08d2-426d-93a5-39ba9b2ad7ef-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f2fff999-08d2-426d-93a5-39ba9b2ad7ef\") " pod="openstack/ovn-northd-0"
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.019938 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fff999-08d2-426d-93a5-39ba9b2ad7ef-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f2fff999-08d2-426d-93a5-39ba9b2ad7ef\") " pod="openstack/ovn-northd-0"
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.032220 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8pj4\" (UniqueName: \"kubernetes.io/projected/f2fff999-08d2-426d-93a5-39ba9b2ad7ef-kube-api-access-j8pj4\") pod \"ovn-northd-0\" (UID: \"f2fff999-08d2-426d-93a5-39ba9b2ad7ef\") " pod="openstack/ovn-northd-0"
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.144291 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.146089 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.175142 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-zcd6z"]
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.194140 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4xj7n"]
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.227416 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vvgkf"]
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.352957 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-94vfs" event={"ID":"285b4d00-7d22-44c0-8a35-6f076f3135a7","Type":"ContainerDied","Data":"f3fcf4fdcdc8332229b71aba0d1c5258531ab1ce82794a78949d91287477bfa3"}
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.353006 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-94vfs"
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.353012 4811 scope.go:117] "RemoveContainer" containerID="47cc52678d1e017e30cdb29dae3c2ea08815bb78597c07833d3ec61ef25b1bde"
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.373474 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-gncf7" event={"ID":"400bc5f6-6b87-4af8-9fa9-4429afb77168","Type":"ContainerDied","Data":"63d86e7c7025eeeee4043f03ef5e96097a80512fad50e854ce8e736f7f1dab16"}
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.373596 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-gncf7"
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.415494 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-zcd6z" event={"ID":"c87cd6ce-730e-4107-8027-71b18ae4a0f7","Type":"ContainerStarted","Data":"b34dc31dd6033ae84e2a944eb25e12c2d30530a3983fa5dc0693850eef7fc0e4"}
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.456423 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4xj7n" event={"ID":"6d8a2432-c873-4ec8-9e02-aaf33ddd6d65","Type":"ContainerStarted","Data":"27344ecac57415184defdd47e00ad01bf4ff59349b4e2d9e81a4ec6d7000771c"}
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.506797 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"32a12c18-c799-4092-8ba9-c89b2a5f713a","Type":"ContainerStarted","Data":"22d7ea48f5156bf46dde69784674db1a598e479a03b14491b546a8e47a1ebb12"}
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.518658 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vvgkf" event={"ID":"5713e95b-f062-47be-8f12-aaa23215b31a","Type":"ContainerStarted","Data":"bf093d318241e5440ded74b94f5e0a91419bd7a6d6cca003c934cfbe2e9a5a1f"}
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.519919 4811 generic.go:334] "Generic (PLEG): container finished" podID="117fc5a2-d29b-4844-9dc6-4359d1c4c24d" containerID="0d1315480ff723e99a50ef586a26aa062913b902e284f9c578a839ff0527b228" exitCode=0
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.520833 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"117fc5a2-d29b-4844-9dc6-4359d1c4c24d","Type":"ContainerDied","Data":"0d1315480ff723e99a50ef586a26aa062913b902e284f9c578a839ff0527b228"}
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.540012 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=31.100465213 podStartE2EDuration="48.539994631s" podCreationTimestamp="2026-02-16 21:11:52 +0000 UTC" firstStartedPulling="2026-02-16 21:12:14.44062028 +0000 UTC m=+952.369916228" lastFinishedPulling="2026-02-16 21:12:31.880149698 +0000 UTC m=+969.809445646" observedRunningTime="2026-02-16 21:12:40.53674297 +0000 UTC m=+978.466038918" watchObservedRunningTime="2026-02-16 21:12:40.539994631 +0000 UTC m=+978.469290569"
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.601571 4811 scope.go:117] "RemoveContainer" containerID="a8f079a1c0c0e91de914f46dfc3f7d3f9e53691ebe8160be093eebd4911a2a1a"
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.627862 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-94vfs"]
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.653182 4811 scope.go:117] "RemoveContainer" containerID="23897c5b8c8d9b903a34cd3bd94533aec7e9e5a94f608337237a595bb2d7847b"
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.658938 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-94vfs"]
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.666120 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gncf7"]
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.673875 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gncf7"]
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.724586 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="285b4d00-7d22-44c0-8a35-6f076f3135a7" path="/var/lib/kubelet/pods/285b4d00-7d22-44c0-8a35-6f076f3135a7/volumes"
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.725671 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="400bc5f6-6b87-4af8-9fa9-4429afb77168" path="/var/lib/kubelet/pods/400bc5f6-6b87-4af8-9fa9-4429afb77168/volumes"
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.780424 4811 scope.go:117] "RemoveContainer" containerID="ae89f4c305523e61d101d43279454585023260213a304080d2aca16789a34766"
Feb 16 21:12:40 crc kubenswrapper[4811]: I0216 21:12:40.922289 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 16 21:12:40 crc kubenswrapper[4811]: W0216 21:12:40.969172 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2fff999_08d2_426d_93a5_39ba9b2ad7ef.slice/crio-9ec40e247482fc77823c6c4b13e9c1ee5daa1f7bc44f974dc64ae8d36eb72151 WatchSource:0}: Error finding container 9ec40e247482fc77823c6c4b13e9c1ee5daa1f7bc44f974dc64ae8d36eb72151: Status 404 returned error can't find the container with id 9ec40e247482fc77823c6c4b13e9c1ee5daa1f7bc44f974dc64ae8d36eb72151
Feb 16 21:12:41 crc kubenswrapper[4811]: I0216 21:12:41.556852 4811 generic.go:334] "Generic (PLEG): container finished" podID="4247055a-8ca2-4a03-9a3a-d582d674b38a" containerID="1ecbba720783e3c6c08a9da6626dfdffbc9cf13424e8958bd036af88a0d5c304" exitCode=0
Feb 16 21:12:41 crc kubenswrapper[4811]: I0216 21:12:41.556927 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4247055a-8ca2-4a03-9a3a-d582d674b38a","Type":"ContainerDied","Data":"1ecbba720783e3c6c08a9da6626dfdffbc9cf13424e8958bd036af88a0d5c304"} Feb 16 21:12:41 crc kubenswrapper[4811]: I0216 21:12:41.562387 4811 generic.go:334] "Generic (PLEG): container finished" podID="c87cd6ce-730e-4107-8027-71b18ae4a0f7" containerID="d00f3a6b0041d6ddb54a5b85d8c50cb1344c2a001fb0ed8183cb25f9c53d6a9a" exitCode=0 Feb 16 21:12:41 crc kubenswrapper[4811]: I0216 21:12:41.563055 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-zcd6z" event={"ID":"c87cd6ce-730e-4107-8027-71b18ae4a0f7","Type":"ContainerDied","Data":"d00f3a6b0041d6ddb54a5b85d8c50cb1344c2a001fb0ed8183cb25f9c53d6a9a"} Feb 16 21:12:41 crc kubenswrapper[4811]: I0216 21:12:41.565957 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4xj7n" event={"ID":"6d8a2432-c873-4ec8-9e02-aaf33ddd6d65","Type":"ContainerStarted","Data":"da2fa2b7c53e8cb2efe86b7c9940736118a41cea3548ba8ee26c5695b8f5d985"} Feb 16 21:12:41 crc kubenswrapper[4811]: I0216 21:12:41.568655 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ffc95bb9-a405-4472-9879-f2dc826ffdb9","Type":"ContainerStarted","Data":"8a54c9ea6dd3e17f85ee90f94dc31333687c286de589cb7c7f4ba20b9e6c92c8"} Feb 16 21:12:41 crc kubenswrapper[4811]: I0216 21:12:41.568870 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 16 21:12:41 crc kubenswrapper[4811]: I0216 21:12:41.570167 4811 generic.go:334] "Generic (PLEG): container finished" podID="5713e95b-f062-47be-8f12-aaa23215b31a" containerID="7c84ca9cd626fd20df89c005af3ffb7c2dab114d5e7edc6ffe05dbf05c4771b1" exitCode=0 Feb 16 21:12:41 crc kubenswrapper[4811]: I0216 21:12:41.570310 4811 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vvgkf" event={"ID":"5713e95b-f062-47be-8f12-aaa23215b31a","Type":"ContainerDied","Data":"7c84ca9cd626fd20df89c005af3ffb7c2dab114d5e7edc6ffe05dbf05c4771b1"} Feb 16 21:12:41 crc kubenswrapper[4811]: I0216 21:12:41.573287 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f2fff999-08d2-426d-93a5-39ba9b2ad7ef","Type":"ContainerStarted","Data":"9ec40e247482fc77823c6c4b13e9c1ee5daa1f7bc44f974dc64ae8d36eb72151"} Feb 16 21:12:41 crc kubenswrapper[4811]: I0216 21:12:41.603960 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=18.568216508 podStartE2EDuration="45.603919488s" podCreationTimestamp="2026-02-16 21:11:56 +0000 UTC" firstStartedPulling="2026-02-16 21:12:13.107501292 +0000 UTC m=+951.036797230" lastFinishedPulling="2026-02-16 21:12:40.143204272 +0000 UTC m=+978.072500210" observedRunningTime="2026-02-16 21:12:41.593802154 +0000 UTC m=+979.523098092" watchObservedRunningTime="2026-02-16 21:12:41.603919488 +0000 UTC m=+979.533215426" Feb 16 21:12:42 crc kubenswrapper[4811]: I0216 21:12:42.624723 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-zcd6z" event={"ID":"c87cd6ce-730e-4107-8027-71b18ae4a0f7","Type":"ContainerStarted","Data":"654e415d708ad8e6609bbcacc0d66ed4cd3d39ad6206eb8917fa9dcbee824631"} Feb 16 21:12:42 crc kubenswrapper[4811]: I0216 21:12:42.625333 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-zcd6z" Feb 16 21:12:42 crc kubenswrapper[4811]: I0216 21:12:42.642518 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vvgkf" event={"ID":"5713e95b-f062-47be-8f12-aaa23215b31a","Type":"ContainerStarted","Data":"702d9069293a76dee0b0b722e45dfda0ff2f646ff4908031403c179a4eb1b4a2"} Feb 16 21:12:42 crc 
kubenswrapper[4811]: I0216 21:12:42.642582 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-vvgkf" Feb 16 21:12:42 crc kubenswrapper[4811]: I0216 21:12:42.652820 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f2fff999-08d2-426d-93a5-39ba9b2ad7ef","Type":"ContainerStarted","Data":"2aa429861cebe35c0058c9e2d81a705c3904d4278461bccc2adfeebb03481c21"} Feb 16 21:12:42 crc kubenswrapper[4811]: I0216 21:12:42.661975 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-zcd6z" podStartSLOduration=4.661953858 podStartE2EDuration="4.661953858s" podCreationTimestamp="2026-02-16 21:12:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:12:42.660294446 +0000 UTC m=+980.589590404" watchObservedRunningTime="2026-02-16 21:12:42.661953858 +0000 UTC m=+980.591249786" Feb 16 21:12:42 crc kubenswrapper[4811]: I0216 21:12:42.665136 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-4xj7n" podStartSLOduration=4.665125537 podStartE2EDuration="4.665125537s" podCreationTimestamp="2026-02-16 21:12:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:12:41.660600564 +0000 UTC m=+979.589896512" watchObservedRunningTime="2026-02-16 21:12:42.665125537 +0000 UTC m=+980.594421475" Feb 16 21:12:42 crc kubenswrapper[4811]: I0216 21:12:42.775949 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-vvgkf" podStartSLOduration=3.775928244 podStartE2EDuration="3.775928244s" podCreationTimestamp="2026-02-16 21:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 21:12:42.689415518 +0000 UTC m=+980.618711456" watchObservedRunningTime="2026-02-16 21:12:42.775928244 +0000 UTC m=+980.705224182" Feb 16 21:12:43 crc kubenswrapper[4811]: I0216 21:12:43.666165 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f2fff999-08d2-426d-93a5-39ba9b2ad7ef","Type":"ContainerStarted","Data":"e2933e6f47716e883ad1f781ebe4565c4445a557b9c8cbb7eaac41cf4ae7a709"} Feb 16 21:12:43 crc kubenswrapper[4811]: I0216 21:12:43.666573 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 16 21:12:43 crc kubenswrapper[4811]: I0216 21:12:43.692460 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.433708848 podStartE2EDuration="4.692441724s" podCreationTimestamp="2026-02-16 21:12:39 +0000 UTC" firstStartedPulling="2026-02-16 21:12:40.971122964 +0000 UTC m=+978.900418902" lastFinishedPulling="2026-02-16 21:12:42.22985585 +0000 UTC m=+980.159151778" observedRunningTime="2026-02-16 21:12:43.689571782 +0000 UTC m=+981.618867800" watchObservedRunningTime="2026-02-16 21:12:43.692441724 +0000 UTC m=+981.621737662" Feb 16 21:12:43 crc kubenswrapper[4811]: I0216 21:12:43.959281 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 16 21:12:43 crc kubenswrapper[4811]: I0216 21:12:43.959623 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 16 21:12:44 crc kubenswrapper[4811]: I0216 21:12:44.800124 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 16 21:12:45 crc kubenswrapper[4811]: I0216 21:12:45.107818 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 16 21:12:45 crc kubenswrapper[4811]: I0216 21:12:45.108216 4811 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 16 21:12:45 crc kubenswrapper[4811]: I0216 21:12:45.683134 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"117fc5a2-d29b-4844-9dc6-4359d1c4c24d","Type":"ContainerStarted","Data":"d526ac20668dc8883a2d59ba55bec3971bbfda2e44fe5154567854af5e3b1123"} Feb 16 21:12:45 crc kubenswrapper[4811]: I0216 21:12:45.762612 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.116858 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.222559 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-1afa-account-create-update-nl8r4"] Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.224032 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1afa-account-create-update-nl8r4" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.226764 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.233624 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1afa-account-create-update-nl8r4"] Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.253289 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-mstfh"] Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.254773 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-mstfh" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.277212 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mstfh"] Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.282670 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.378216 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ffsg\" (UniqueName: \"kubernetes.io/projected/f556f9d0-3444-46b3-b435-dcf08cf76c0c-kube-api-access-2ffsg\") pod \"keystone-1afa-account-create-update-nl8r4\" (UID: \"f556f9d0-3444-46b3-b435-dcf08cf76c0c\") " pod="openstack/keystone-1afa-account-create-update-nl8r4" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.378333 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdxzk\" (UniqueName: \"kubernetes.io/projected/48b8148b-cf17-4592-8583-edb4ccedca18-kube-api-access-wdxzk\") pod \"keystone-db-create-mstfh\" (UID: \"48b8148b-cf17-4592-8583-edb4ccedca18\") " pod="openstack/keystone-db-create-mstfh" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.378368 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48b8148b-cf17-4592-8583-edb4ccedca18-operator-scripts\") pod \"keystone-db-create-mstfh\" (UID: \"48b8148b-cf17-4592-8583-edb4ccedca18\") " pod="openstack/keystone-db-create-mstfh" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.378433 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f556f9d0-3444-46b3-b435-dcf08cf76c0c-operator-scripts\") pod \"keystone-1afa-account-create-update-nl8r4\" 
(UID: \"f556f9d0-3444-46b3-b435-dcf08cf76c0c\") " pod="openstack/keystone-1afa-account-create-update-nl8r4" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.414580 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-lmvpb"] Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.415725 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-lmvpb" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.437131 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-lmvpb"] Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.455660 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6610-account-create-update-brzlq"] Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.456822 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6610-account-create-update-brzlq" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.465158 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.487615 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6610-account-create-update-brzlq"] Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.489295 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f556f9d0-3444-46b3-b435-dcf08cf76c0c-operator-scripts\") pod \"keystone-1afa-account-create-update-nl8r4\" (UID: \"f556f9d0-3444-46b3-b435-dcf08cf76c0c\") " pod="openstack/keystone-1afa-account-create-update-nl8r4" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.489584 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ffsg\" (UniqueName: 
\"kubernetes.io/projected/f556f9d0-3444-46b3-b435-dcf08cf76c0c-kube-api-access-2ffsg\") pod \"keystone-1afa-account-create-update-nl8r4\" (UID: \"f556f9d0-3444-46b3-b435-dcf08cf76c0c\") " pod="openstack/keystone-1afa-account-create-update-nl8r4" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.490617 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdxzk\" (UniqueName: \"kubernetes.io/projected/48b8148b-cf17-4592-8583-edb4ccedca18-kube-api-access-wdxzk\") pod \"keystone-db-create-mstfh\" (UID: \"48b8148b-cf17-4592-8583-edb4ccedca18\") " pod="openstack/keystone-db-create-mstfh" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.490653 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48b8148b-cf17-4592-8583-edb4ccedca18-operator-scripts\") pod \"keystone-db-create-mstfh\" (UID: \"48b8148b-cf17-4592-8583-edb4ccedca18\") " pod="openstack/keystone-db-create-mstfh" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.491568 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f556f9d0-3444-46b3-b435-dcf08cf76c0c-operator-scripts\") pod \"keystone-1afa-account-create-update-nl8r4\" (UID: \"f556f9d0-3444-46b3-b435-dcf08cf76c0c\") " pod="openstack/keystone-1afa-account-create-update-nl8r4" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.491748 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48b8148b-cf17-4592-8583-edb4ccedca18-operator-scripts\") pod \"keystone-db-create-mstfh\" (UID: \"48b8148b-cf17-4592-8583-edb4ccedca18\") " pod="openstack/keystone-db-create-mstfh" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.559445 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ffsg\" (UniqueName: 
\"kubernetes.io/projected/f556f9d0-3444-46b3-b435-dcf08cf76c0c-kube-api-access-2ffsg\") pod \"keystone-1afa-account-create-update-nl8r4\" (UID: \"f556f9d0-3444-46b3-b435-dcf08cf76c0c\") " pod="openstack/keystone-1afa-account-create-update-nl8r4" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.561485 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdxzk\" (UniqueName: \"kubernetes.io/projected/48b8148b-cf17-4592-8583-edb4ccedca18-kube-api-access-wdxzk\") pod \"keystone-db-create-mstfh\" (UID: \"48b8148b-cf17-4592-8583-edb4ccedca18\") " pod="openstack/keystone-db-create-mstfh" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.574297 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mstfh" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.592550 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rrzx\" (UniqueName: \"kubernetes.io/projected/15286204-6ffc-4f13-aacb-8c231edf893d-kube-api-access-7rrzx\") pod \"placement-db-create-lmvpb\" (UID: \"15286204-6ffc-4f13-aacb-8c231edf893d\") " pod="openstack/placement-db-create-lmvpb" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.592623 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15286204-6ffc-4f13-aacb-8c231edf893d-operator-scripts\") pod \"placement-db-create-lmvpb\" (UID: \"15286204-6ffc-4f13-aacb-8c231edf893d\") " pod="openstack/placement-db-create-lmvpb" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.592729 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26mg2\" (UniqueName: \"kubernetes.io/projected/eebc5893-8007-4da8-8e04-9c54d1a7b57c-kube-api-access-26mg2\") pod \"placement-6610-account-create-update-brzlq\" (UID: 
\"eebc5893-8007-4da8-8e04-9c54d1a7b57c\") " pod="openstack/placement-6610-account-create-update-brzlq" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.592755 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eebc5893-8007-4da8-8e04-9c54d1a7b57c-operator-scripts\") pod \"placement-6610-account-create-update-brzlq\" (UID: \"eebc5893-8007-4da8-8e04-9c54d1a7b57c\") " pod="openstack/placement-6610-account-create-update-brzlq" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.693942 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26mg2\" (UniqueName: \"kubernetes.io/projected/eebc5893-8007-4da8-8e04-9c54d1a7b57c-kube-api-access-26mg2\") pod \"placement-6610-account-create-update-brzlq\" (UID: \"eebc5893-8007-4da8-8e04-9c54d1a7b57c\") " pod="openstack/placement-6610-account-create-update-brzlq" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.693997 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eebc5893-8007-4da8-8e04-9c54d1a7b57c-operator-scripts\") pod \"placement-6610-account-create-update-brzlq\" (UID: \"eebc5893-8007-4da8-8e04-9c54d1a7b57c\") " pod="openstack/placement-6610-account-create-update-brzlq" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.694072 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rrzx\" (UniqueName: \"kubernetes.io/projected/15286204-6ffc-4f13-aacb-8c231edf893d-kube-api-access-7rrzx\") pod \"placement-db-create-lmvpb\" (UID: \"15286204-6ffc-4f13-aacb-8c231edf893d\") " pod="openstack/placement-db-create-lmvpb" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.694128 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/15286204-6ffc-4f13-aacb-8c231edf893d-operator-scripts\") pod \"placement-db-create-lmvpb\" (UID: \"15286204-6ffc-4f13-aacb-8c231edf893d\") " pod="openstack/placement-db-create-lmvpb" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.694928 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eebc5893-8007-4da8-8e04-9c54d1a7b57c-operator-scripts\") pod \"placement-6610-account-create-update-brzlq\" (UID: \"eebc5893-8007-4da8-8e04-9c54d1a7b57c\") " pod="openstack/placement-6610-account-create-update-brzlq" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.695061 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15286204-6ffc-4f13-aacb-8c231edf893d-operator-scripts\") pod \"placement-db-create-lmvpb\" (UID: \"15286204-6ffc-4f13-aacb-8c231edf893d\") " pod="openstack/placement-db-create-lmvpb" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.709804 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rrzx\" (UniqueName: \"kubernetes.io/projected/15286204-6ffc-4f13-aacb-8c231edf893d-kube-api-access-7rrzx\") pod \"placement-db-create-lmvpb\" (UID: \"15286204-6ffc-4f13-aacb-8c231edf893d\") " pod="openstack/placement-db-create-lmvpb" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.712483 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26mg2\" (UniqueName: \"kubernetes.io/projected/eebc5893-8007-4da8-8e04-9c54d1a7b57c-kube-api-access-26mg2\") pod \"placement-6610-account-create-update-brzlq\" (UID: \"eebc5893-8007-4da8-8e04-9c54d1a7b57c\") " pod="openstack/placement-6610-account-create-update-brzlq" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.740077 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-lmvpb" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.779306 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6610-account-create-update-brzlq" Feb 16 21:12:46 crc kubenswrapper[4811]: I0216 21:12:46.843935 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1afa-account-create-update-nl8r4" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.280111 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-zcd6z"] Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.280538 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-zcd6z" podUID="c87cd6ce-730e-4107-8027-71b18ae4a0f7" containerName="dnsmasq-dns" containerID="cri-o://654e415d708ad8e6609bbcacc0d66ed4cd3d39ad6206eb8917fa9dcbee824631" gracePeriod=10 Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.286026 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-zcd6z" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.313522 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.339112 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-cdcch"] Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.342245 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.385923 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-cdcch"] Feb 16 21:12:47 crc kubenswrapper[4811]: E0216 21:12:47.459094 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc87cd6ce_730e_4107_8027_71b18ae4a0f7.slice/crio-conmon-654e415d708ad8e6609bbcacc0d66ed4cd3d39ad6206eb8917fa9dcbee824631.scope\": RecentStats: unable to find data in memory cache]" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.512159 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bf21953-4d87-4a23-a09a-454e12365b71-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-cdcch\" (UID: \"4bf21953-4d87-4a23-a09a-454e12365b71\") " pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.512216 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bf21953-4d87-4a23-a09a-454e12365b71-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-cdcch\" (UID: \"4bf21953-4d87-4a23-a09a-454e12365b71\") " pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.512458 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bf21953-4d87-4a23-a09a-454e12365b71-config\") pod \"dnsmasq-dns-b8fbc5445-cdcch\" (UID: \"4bf21953-4d87-4a23-a09a-454e12365b71\") " pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.512499 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bf21953-4d87-4a23-a09a-454e12365b71-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-cdcch\" (UID: \"4bf21953-4d87-4a23-a09a-454e12365b71\") " pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.512559 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9sw2\" (UniqueName: \"kubernetes.io/projected/4bf21953-4d87-4a23-a09a-454e12365b71-kube-api-access-g9sw2\") pod \"dnsmasq-dns-b8fbc5445-cdcch\" (UID: \"4bf21953-4d87-4a23-a09a-454e12365b71\") " pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.579526 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-92bf7" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.579894 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-92bf7" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.614254 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9sw2\" (UniqueName: \"kubernetes.io/projected/4bf21953-4d87-4a23-a09a-454e12365b71-kube-api-access-g9sw2\") pod \"dnsmasq-dns-b8fbc5445-cdcch\" (UID: \"4bf21953-4d87-4a23-a09a-454e12365b71\") " pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.614370 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bf21953-4d87-4a23-a09a-454e12365b71-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-cdcch\" (UID: \"4bf21953-4d87-4a23-a09a-454e12365b71\") " pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.614398 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/4bf21953-4d87-4a23-a09a-454e12365b71-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-cdcch\" (UID: \"4bf21953-4d87-4a23-a09a-454e12365b71\") " pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.614446 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bf21953-4d87-4a23-a09a-454e12365b71-config\") pod \"dnsmasq-dns-b8fbc5445-cdcch\" (UID: \"4bf21953-4d87-4a23-a09a-454e12365b71\") " pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.614474 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bf21953-4d87-4a23-a09a-454e12365b71-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-cdcch\" (UID: \"4bf21953-4d87-4a23-a09a-454e12365b71\") " pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.615269 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bf21953-4d87-4a23-a09a-454e12365b71-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-cdcch\" (UID: \"4bf21953-4d87-4a23-a09a-454e12365b71\") " pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.616140 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bf21953-4d87-4a23-a09a-454e12365b71-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-cdcch\" (UID: \"4bf21953-4d87-4a23-a09a-454e12365b71\") " pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.616659 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bf21953-4d87-4a23-a09a-454e12365b71-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-cdcch\" 
(UID: \"4bf21953-4d87-4a23-a09a-454e12365b71\") " pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.617226 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bf21953-4d87-4a23-a09a-454e12365b71-config\") pod \"dnsmasq-dns-b8fbc5445-cdcch\" (UID: \"4bf21953-4d87-4a23-a09a-454e12365b71\") " pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.661001 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9sw2\" (UniqueName: \"kubernetes.io/projected/4bf21953-4d87-4a23-a09a-454e12365b71-kube-api-access-g9sw2\") pod \"dnsmasq-dns-b8fbc5445-cdcch\" (UID: \"4bf21953-4d87-4a23-a09a-454e12365b71\") " pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.666702 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.727170 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-92bf7" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.749542 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"117fc5a2-d29b-4844-9dc6-4359d1c4c24d","Type":"ContainerStarted","Data":"da9687faaa76bf921a8b46e1f2647d8496f4380947409984678ce891d4d1744e"} Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.750378 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.755390 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.756297 4811 generic.go:334] "Generic 
(PLEG): container finished" podID="c87cd6ce-730e-4107-8027-71b18ae4a0f7" containerID="654e415d708ad8e6609bbcacc0d66ed4cd3d39ad6206eb8917fa9dcbee824631" exitCode=0 Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.756341 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-zcd6z" event={"ID":"c87cd6ce-730e-4107-8027-71b18ae4a0f7","Type":"ContainerDied","Data":"654e415d708ad8e6609bbcacc0d66ed4cd3d39ad6206eb8917fa9dcbee824631"} Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.759492 4811 generic.go:334] "Generic (PLEG): container finished" podID="cd541633-15e7-4a12-99a4-72637521386d" containerID="7caad1429ce704c8fb74cbbcdef94962dcaecef310a578dd1035d4bc6a9d0f1c" exitCode=0 Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.759606 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cd541633-15e7-4a12-99a4-72637521386d","Type":"ContainerDied","Data":"7caad1429ce704c8fb74cbbcdef94962dcaecef310a578dd1035d4bc6a9d0f1c"} Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.771420 4811 generic.go:334] "Generic (PLEG): container finished" podID="40263486-d6cd-4aa0-9570-affea970096f" containerID="f36361c6eae49f5925bc5dae3a142e7b838b079bb69edbe0c88c890b5bf9d97f" exitCode=0 Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.772288 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"40263486-d6cd-4aa0-9570-affea970096f","Type":"ContainerDied","Data":"f36361c6eae49f5925bc5dae3a142e7b838b079bb69edbe0c88c890b5bf9d97f"} Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.841889 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-92bf7" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.857216 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" 
podStartSLOduration=20.774503515 podStartE2EDuration="50.857182616s" podCreationTimestamp="2026-02-16 21:11:57 +0000 UTC" firstStartedPulling="2026-02-16 21:12:14.415624271 +0000 UTC m=+952.344920229" lastFinishedPulling="2026-02-16 21:12:44.498303392 +0000 UTC m=+982.427599330" observedRunningTime="2026-02-16 21:12:47.84618186 +0000 UTC m=+985.775477808" watchObservedRunningTime="2026-02-16 21:12:47.857182616 +0000 UTC m=+985.786478554" Feb 16 21:12:47 crc kubenswrapper[4811]: I0216 21:12:47.997418 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-92bf7"] Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.050451 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s8hk9"] Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.050744 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s8hk9" podUID="6c5c0388-6acf-443c-9db5-486defcdeacd" containerName="registry-server" containerID="cri-o://97b41f6e05256b35e8a212c24d609dd7050d44035be3cdfc3bf6f70866dc16f8" gracePeriod=2 Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.396341 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.403771 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.405984 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-wjwt7" Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.406786 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.407038 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.416647 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.417759 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.529706 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3590443c-c5fd-4eec-a144-06cddd956651-cache\") pod \"swift-storage-0\" (UID: \"3590443c-c5fd-4eec-a144-06cddd956651\") " pod="openstack/swift-storage-0" Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.529749 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbvj5\" (UniqueName: \"kubernetes.io/projected/3590443c-c5fd-4eec-a144-06cddd956651-kube-api-access-sbvj5\") pod \"swift-storage-0\" (UID: \"3590443c-c5fd-4eec-a144-06cddd956651\") " pod="openstack/swift-storage-0" Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.529917 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a9461027-ae53-49ea-b5ca-25362cdc2711\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9461027-ae53-49ea-b5ca-25362cdc2711\") pod \"swift-storage-0\" (UID: 
\"3590443c-c5fd-4eec-a144-06cddd956651\") " pod="openstack/swift-storage-0" Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.530141 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3590443c-c5fd-4eec-a144-06cddd956651-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3590443c-c5fd-4eec-a144-06cddd956651\") " pod="openstack/swift-storage-0" Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.530205 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3590443c-c5fd-4eec-a144-06cddd956651-lock\") pod \"swift-storage-0\" (UID: \"3590443c-c5fd-4eec-a144-06cddd956651\") " pod="openstack/swift-storage-0" Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.530271 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3590443c-c5fd-4eec-a144-06cddd956651-etc-swift\") pod \"swift-storage-0\" (UID: \"3590443c-c5fd-4eec-a144-06cddd956651\") " pod="openstack/swift-storage-0" Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.631593 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a9461027-ae53-49ea-b5ca-25362cdc2711\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9461027-ae53-49ea-b5ca-25362cdc2711\") pod \"swift-storage-0\" (UID: \"3590443c-c5fd-4eec-a144-06cddd956651\") " pod="openstack/swift-storage-0" Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.632319 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3590443c-c5fd-4eec-a144-06cddd956651-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3590443c-c5fd-4eec-a144-06cddd956651\") " pod="openstack/swift-storage-0" Feb 16 21:12:48 crc 
kubenswrapper[4811]: I0216 21:12:48.632433 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3590443c-c5fd-4eec-a144-06cddd956651-lock\") pod \"swift-storage-0\" (UID: \"3590443c-c5fd-4eec-a144-06cddd956651\") " pod="openstack/swift-storage-0" Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.632522 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3590443c-c5fd-4eec-a144-06cddd956651-etc-swift\") pod \"swift-storage-0\" (UID: \"3590443c-c5fd-4eec-a144-06cddd956651\") " pod="openstack/swift-storage-0" Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.632623 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3590443c-c5fd-4eec-a144-06cddd956651-cache\") pod \"swift-storage-0\" (UID: \"3590443c-c5fd-4eec-a144-06cddd956651\") " pod="openstack/swift-storage-0" Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.632711 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbvj5\" (UniqueName: \"kubernetes.io/projected/3590443c-c5fd-4eec-a144-06cddd956651-kube-api-access-sbvj5\") pod \"swift-storage-0\" (UID: \"3590443c-c5fd-4eec-a144-06cddd956651\") " pod="openstack/swift-storage-0" Feb 16 21:12:48 crc kubenswrapper[4811]: E0216 21:12:48.633098 4811 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 21:12:48 crc kubenswrapper[4811]: E0216 21:12:48.633184 4811 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 21:12:48 crc kubenswrapper[4811]: E0216 21:12:48.633299 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3590443c-c5fd-4eec-a144-06cddd956651-etc-swift 
podName:3590443c-c5fd-4eec-a144-06cddd956651 nodeName:}" failed. No retries permitted until 2026-02-16 21:12:49.133286764 +0000 UTC m=+987.062582702 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3590443c-c5fd-4eec-a144-06cddd956651-etc-swift") pod "swift-storage-0" (UID: "3590443c-c5fd-4eec-a144-06cddd956651") : configmap "swift-ring-files" not found Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.633501 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3590443c-c5fd-4eec-a144-06cddd956651-lock\") pod \"swift-storage-0\" (UID: \"3590443c-c5fd-4eec-a144-06cddd956651\") " pod="openstack/swift-storage-0" Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.633835 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3590443c-c5fd-4eec-a144-06cddd956651-cache\") pod \"swift-storage-0\" (UID: \"3590443c-c5fd-4eec-a144-06cddd956651\") " pod="openstack/swift-storage-0" Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.643073 4811 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.643303 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a9461027-ae53-49ea-b5ca-25362cdc2711\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9461027-ae53-49ea-b5ca-25362cdc2711\") pod \"swift-storage-0\" (UID: \"3590443c-c5fd-4eec-a144-06cddd956651\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/646377412c15100fbed788e0cef399f091072d48927c74f8af3f9bfd310d12fc/globalmount\"" pod="openstack/swift-storage-0" Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.652486 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3590443c-c5fd-4eec-a144-06cddd956651-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3590443c-c5fd-4eec-a144-06cddd956651\") " pod="openstack/swift-storage-0" Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.686092 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbvj5\" (UniqueName: \"kubernetes.io/projected/3590443c-c5fd-4eec-a144-06cddd956651-kube-api-access-sbvj5\") pod \"swift-storage-0\" (UID: \"3590443c-c5fd-4eec-a144-06cddd956651\") " pod="openstack/swift-storage-0" Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.762810 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a9461027-ae53-49ea-b5ca-25362cdc2711\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9461027-ae53-49ea-b5ca-25362cdc2711\") pod \"swift-storage-0\" (UID: \"3590443c-c5fd-4eec-a144-06cddd956651\") " pod="openstack/swift-storage-0" Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.784890 4811 generic.go:334] "Generic (PLEG): container finished" podID="6c5c0388-6acf-443c-9db5-486defcdeacd" containerID="97b41f6e05256b35e8a212c24d609dd7050d44035be3cdfc3bf6f70866dc16f8" exitCode=0 Feb 16 21:12:48 crc kubenswrapper[4811]: 
I0216 21:12:48.785134 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s8hk9" event={"ID":"6c5c0388-6acf-443c-9db5-486defcdeacd","Type":"ContainerDied","Data":"97b41f6e05256b35e8a212c24d609dd7050d44035be3cdfc3bf6f70866dc16f8"} Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.977976 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-7dnxf"] Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.979120 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7dnxf" Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.985504 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.985644 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.985712 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 16 21:12:48 crc kubenswrapper[4811]: I0216 21:12:48.989787 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-7dnxf"] Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.110903 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5bf47b49b7-zcd6z" podUID="c87cd6ce-730e-4107-8027-71b18ae4a0f7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused" Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.141713 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b6c7641-19e7-4831-82d4-8eda499301b7-combined-ca-bundle\") pod \"swift-ring-rebalance-7dnxf\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " 
pod="openstack/swift-ring-rebalance-7dnxf" Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.142031 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bps6w\" (UniqueName: \"kubernetes.io/projected/8b6c7641-19e7-4831-82d4-8eda499301b7-kube-api-access-bps6w\") pod \"swift-ring-rebalance-7dnxf\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " pod="openstack/swift-ring-rebalance-7dnxf" Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.142236 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8b6c7641-19e7-4831-82d4-8eda499301b7-ring-data-devices\") pod \"swift-ring-rebalance-7dnxf\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " pod="openstack/swift-ring-rebalance-7dnxf" Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.142359 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8b6c7641-19e7-4831-82d4-8eda499301b7-dispersionconf\") pod \"swift-ring-rebalance-7dnxf\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " pod="openstack/swift-ring-rebalance-7dnxf" Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.142538 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8b6c7641-19e7-4831-82d4-8eda499301b7-swiftconf\") pod \"swift-ring-rebalance-7dnxf\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " pod="openstack/swift-ring-rebalance-7dnxf" Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.142728 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3590443c-c5fd-4eec-a144-06cddd956651-etc-swift\") pod \"swift-storage-0\" (UID: \"3590443c-c5fd-4eec-a144-06cddd956651\") " 
pod="openstack/swift-storage-0" Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.142790 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8b6c7641-19e7-4831-82d4-8eda499301b7-etc-swift\") pod \"swift-ring-rebalance-7dnxf\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " pod="openstack/swift-ring-rebalance-7dnxf" Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.142845 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b6c7641-19e7-4831-82d4-8eda499301b7-scripts\") pod \"swift-ring-rebalance-7dnxf\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " pod="openstack/swift-ring-rebalance-7dnxf" Feb 16 21:12:49 crc kubenswrapper[4811]: E0216 21:12:49.142972 4811 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 21:12:49 crc kubenswrapper[4811]: E0216 21:12:49.143012 4811 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 21:12:49 crc kubenswrapper[4811]: E0216 21:12:49.143083 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3590443c-c5fd-4eec-a144-06cddd956651-etc-swift podName:3590443c-c5fd-4eec-a144-06cddd956651 nodeName:}" failed. No retries permitted until 2026-02-16 21:12:50.143061265 +0000 UTC m=+988.072357203 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3590443c-c5fd-4eec-a144-06cddd956651-etc-swift") pod "swift-storage-0" (UID: "3590443c-c5fd-4eec-a144-06cddd956651") : configmap "swift-ring-files" not found Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.244257 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8b6c7641-19e7-4831-82d4-8eda499301b7-swiftconf\") pod \"swift-ring-rebalance-7dnxf\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " pod="openstack/swift-ring-rebalance-7dnxf" Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.244322 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8b6c7641-19e7-4831-82d4-8eda499301b7-etc-swift\") pod \"swift-ring-rebalance-7dnxf\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " pod="openstack/swift-ring-rebalance-7dnxf" Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.244373 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b6c7641-19e7-4831-82d4-8eda499301b7-scripts\") pod \"swift-ring-rebalance-7dnxf\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " pod="openstack/swift-ring-rebalance-7dnxf" Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.244908 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8b6c7641-19e7-4831-82d4-8eda499301b7-etc-swift\") pod \"swift-ring-rebalance-7dnxf\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " pod="openstack/swift-ring-rebalance-7dnxf" Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.244974 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b6c7641-19e7-4831-82d4-8eda499301b7-combined-ca-bundle\") 
pod \"swift-ring-rebalance-7dnxf\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " pod="openstack/swift-ring-rebalance-7dnxf" Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.245006 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bps6w\" (UniqueName: \"kubernetes.io/projected/8b6c7641-19e7-4831-82d4-8eda499301b7-kube-api-access-bps6w\") pod \"swift-ring-rebalance-7dnxf\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " pod="openstack/swift-ring-rebalance-7dnxf" Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.245461 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8b6c7641-19e7-4831-82d4-8eda499301b7-ring-data-devices\") pod \"swift-ring-rebalance-7dnxf\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " pod="openstack/swift-ring-rebalance-7dnxf" Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.245480 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8b6c7641-19e7-4831-82d4-8eda499301b7-dispersionconf\") pod \"swift-ring-rebalance-7dnxf\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " pod="openstack/swift-ring-rebalance-7dnxf" Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.245767 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b6c7641-19e7-4831-82d4-8eda499301b7-scripts\") pod \"swift-ring-rebalance-7dnxf\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " pod="openstack/swift-ring-rebalance-7dnxf" Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.246275 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8b6c7641-19e7-4831-82d4-8eda499301b7-ring-data-devices\") pod \"swift-ring-rebalance-7dnxf\" (UID: 
\"8b6c7641-19e7-4831-82d4-8eda499301b7\") " pod="openstack/swift-ring-rebalance-7dnxf" Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.248415 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8b6c7641-19e7-4831-82d4-8eda499301b7-swiftconf\") pod \"swift-ring-rebalance-7dnxf\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " pod="openstack/swift-ring-rebalance-7dnxf" Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.248797 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b6c7641-19e7-4831-82d4-8eda499301b7-combined-ca-bundle\") pod \"swift-ring-rebalance-7dnxf\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " pod="openstack/swift-ring-rebalance-7dnxf" Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.249137 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8b6c7641-19e7-4831-82d4-8eda499301b7-dispersionconf\") pod \"swift-ring-rebalance-7dnxf\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " pod="openstack/swift-ring-rebalance-7dnxf" Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.265416 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bps6w\" (UniqueName: \"kubernetes.io/projected/8b6c7641-19e7-4831-82d4-8eda499301b7-kube-api-access-bps6w\") pod \"swift-ring-rebalance-7dnxf\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " pod="openstack/swift-ring-rebalance-7dnxf" Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.337878 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-7dnxf" Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.355103 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-68wjj" Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.522387 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-vvgkf" Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.654493 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-jld6l" Feb 16 21:12:49 crc kubenswrapper[4811]: I0216 21:12:49.659611 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb" Feb 16 21:12:50 crc kubenswrapper[4811]: I0216 21:12:50.163065 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3590443c-c5fd-4eec-a144-06cddd956651-etc-swift\") pod \"swift-storage-0\" (UID: \"3590443c-c5fd-4eec-a144-06cddd956651\") " pod="openstack/swift-storage-0" Feb 16 21:12:50 crc kubenswrapper[4811]: E0216 21:12:50.163290 4811 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 21:12:50 crc kubenswrapper[4811]: E0216 21:12:50.163499 4811 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 21:12:50 crc kubenswrapper[4811]: E0216 21:12:50.163551 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3590443c-c5fd-4eec-a144-06cddd956651-etc-swift podName:3590443c-c5fd-4eec-a144-06cddd956651 nodeName:}" failed. No retries permitted until 2026-02-16 21:12:52.16353483 +0000 UTC m=+990.092830768 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3590443c-c5fd-4eec-a144-06cddd956651-etc-swift") pod "swift-storage-0" (UID: "3590443c-c5fd-4eec-a144-06cddd956651") : configmap "swift-ring-files" not found Feb 16 21:12:50 crc kubenswrapper[4811]: I0216 21:12:50.369403 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-xw259"] Feb 16 21:12:50 crc kubenswrapper[4811]: I0216 21:12:50.376274 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-xw259" Feb 16 21:12:50 crc kubenswrapper[4811]: I0216 21:12:50.420318 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-xw259"] Feb 16 21:12:50 crc kubenswrapper[4811]: I0216 21:12:50.475143 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dslq2\" (UniqueName: \"kubernetes.io/projected/6922a5b7-d2e7-489e-b42d-1a54a1d85b6a-kube-api-access-dslq2\") pod \"glance-db-create-xw259\" (UID: \"6922a5b7-d2e7-489e-b42d-1a54a1d85b6a\") " pod="openstack/glance-db-create-xw259" Feb 16 21:12:50 crc kubenswrapper[4811]: I0216 21:12:50.475185 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6922a5b7-d2e7-489e-b42d-1a54a1d85b6a-operator-scripts\") pod \"glance-db-create-xw259\" (UID: \"6922a5b7-d2e7-489e-b42d-1a54a1d85b6a\") " pod="openstack/glance-db-create-xw259" Feb 16 21:12:50 crc kubenswrapper[4811]: I0216 21:12:50.573529 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4b4c-account-create-update-q54gf"] Feb 16 21:12:50 crc kubenswrapper[4811]: I0216 21:12:50.574586 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4b4c-account-create-update-q54gf" Feb 16 21:12:50 crc kubenswrapper[4811]: I0216 21:12:50.576976 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dslq2\" (UniqueName: \"kubernetes.io/projected/6922a5b7-d2e7-489e-b42d-1a54a1d85b6a-kube-api-access-dslq2\") pod \"glance-db-create-xw259\" (UID: \"6922a5b7-d2e7-489e-b42d-1a54a1d85b6a\") " pod="openstack/glance-db-create-xw259" Feb 16 21:12:50 crc kubenswrapper[4811]: I0216 21:12:50.577019 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6922a5b7-d2e7-489e-b42d-1a54a1d85b6a-operator-scripts\") pod \"glance-db-create-xw259\" (UID: \"6922a5b7-d2e7-489e-b42d-1a54a1d85b6a\") " pod="openstack/glance-db-create-xw259" Feb 16 21:12:50 crc kubenswrapper[4811]: I0216 21:12:50.578078 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6922a5b7-d2e7-489e-b42d-1a54a1d85b6a-operator-scripts\") pod \"glance-db-create-xw259\" (UID: \"6922a5b7-d2e7-489e-b42d-1a54a1d85b6a\") " pod="openstack/glance-db-create-xw259" Feb 16 21:12:50 crc kubenswrapper[4811]: I0216 21:12:50.580797 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 16 21:12:50 crc kubenswrapper[4811]: I0216 21:12:50.588808 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4b4c-account-create-update-q54gf"] Feb 16 21:12:50 crc kubenswrapper[4811]: I0216 21:12:50.618887 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 21:12:50 crc kubenswrapper[4811]: I0216 21:12:50.629693 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dslq2\" (UniqueName: 
\"kubernetes.io/projected/6922a5b7-d2e7-489e-b42d-1a54a1d85b6a-kube-api-access-dslq2\") pod \"glance-db-create-xw259\" (UID: \"6922a5b7-d2e7-489e-b42d-1a54a1d85b6a\") " pod="openstack/glance-db-create-xw259" Feb 16 21:12:50 crc kubenswrapper[4811]: I0216 21:12:50.678534 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzddt\" (UniqueName: \"kubernetes.io/projected/4aed10ff-a730-4ac8-88c7-395a71b9554b-kube-api-access-wzddt\") pod \"glance-4b4c-account-create-update-q54gf\" (UID: \"4aed10ff-a730-4ac8-88c7-395a71b9554b\") " pod="openstack/glance-4b4c-account-create-update-q54gf" Feb 16 21:12:50 crc kubenswrapper[4811]: I0216 21:12:50.678679 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aed10ff-a730-4ac8-88c7-395a71b9554b-operator-scripts\") pod \"glance-4b4c-account-create-update-q54gf\" (UID: \"4aed10ff-a730-4ac8-88c7-395a71b9554b\") " pod="openstack/glance-4b4c-account-create-update-q54gf" Feb 16 21:12:50 crc kubenswrapper[4811]: I0216 21:12:50.712940 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-xw259" Feb 16 21:12:50 crc kubenswrapper[4811]: I0216 21:12:50.780380 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aed10ff-a730-4ac8-88c7-395a71b9554b-operator-scripts\") pod \"glance-4b4c-account-create-update-q54gf\" (UID: \"4aed10ff-a730-4ac8-88c7-395a71b9554b\") " pod="openstack/glance-4b4c-account-create-update-q54gf" Feb 16 21:12:50 crc kubenswrapper[4811]: I0216 21:12:50.780483 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzddt\" (UniqueName: \"kubernetes.io/projected/4aed10ff-a730-4ac8-88c7-395a71b9554b-kube-api-access-wzddt\") pod \"glance-4b4c-account-create-update-q54gf\" (UID: \"4aed10ff-a730-4ac8-88c7-395a71b9554b\") " pod="openstack/glance-4b4c-account-create-update-q54gf" Feb 16 21:12:50 crc kubenswrapper[4811]: I0216 21:12:50.781631 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aed10ff-a730-4ac8-88c7-395a71b9554b-operator-scripts\") pod \"glance-4b4c-account-create-update-q54gf\" (UID: \"4aed10ff-a730-4ac8-88c7-395a71b9554b\") " pod="openstack/glance-4b4c-account-create-update-q54gf" Feb 16 21:12:50 crc kubenswrapper[4811]: I0216 21:12:50.799534 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzddt\" (UniqueName: \"kubernetes.io/projected/4aed10ff-a730-4ac8-88c7-395a71b9554b-kube-api-access-wzddt\") pod \"glance-4b4c-account-create-update-q54gf\" (UID: \"4aed10ff-a730-4ac8-88c7-395a71b9554b\") " pod="openstack/glance-4b4c-account-create-update-q54gf" Feb 16 21:12:50 crc kubenswrapper[4811]: I0216 21:12:50.847014 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="5f050753-85f4-413e-92b6-0503db5e7391" containerName="loki-ingester" probeResult="failure" 
output="HTTP probe failed with statuscode: 503" Feb 16 21:12:50 crc kubenswrapper[4811]: I0216 21:12:50.893859 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4b4c-account-create-update-q54gf" Feb 16 21:12:51 crc kubenswrapper[4811]: I0216 21:12:51.003099 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.209647 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3590443c-c5fd-4eec-a144-06cddd956651-etc-swift\") pod \"swift-storage-0\" (UID: \"3590443c-c5fd-4eec-a144-06cddd956651\") " pod="openstack/swift-storage-0" Feb 16 21:12:52 crc kubenswrapper[4811]: E0216 21:12:52.210240 4811 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 21:12:52 crc kubenswrapper[4811]: E0216 21:12:52.210257 4811 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 21:12:52 crc kubenswrapper[4811]: E0216 21:12:52.210297 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3590443c-c5fd-4eec-a144-06cddd956651-etc-swift podName:3590443c-c5fd-4eec-a144-06cddd956651 nodeName:}" failed. No retries permitted until 2026-02-16 21:12:56.210283435 +0000 UTC m=+994.139579373 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3590443c-c5fd-4eec-a144-06cddd956651-etc-swift") pod "swift-storage-0" (UID: "3590443c-c5fd-4eec-a144-06cddd956651") : configmap "swift-ring-files" not found Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.266943 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s8hk9" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.349574 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-zcd6z" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.416135 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c5c0388-6acf-443c-9db5-486defcdeacd-utilities\") pod \"6c5c0388-6acf-443c-9db5-486defcdeacd\" (UID: \"6c5c0388-6acf-443c-9db5-486defcdeacd\") " Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.416230 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fppwx\" (UniqueName: \"kubernetes.io/projected/c87cd6ce-730e-4107-8027-71b18ae4a0f7-kube-api-access-fppwx\") pod \"c87cd6ce-730e-4107-8027-71b18ae4a0f7\" (UID: \"c87cd6ce-730e-4107-8027-71b18ae4a0f7\") " Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.416280 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c5c0388-6acf-443c-9db5-486defcdeacd-catalog-content\") pod \"6c5c0388-6acf-443c-9db5-486defcdeacd\" (UID: \"6c5c0388-6acf-443c-9db5-486defcdeacd\") " Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.416366 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c87cd6ce-730e-4107-8027-71b18ae4a0f7-config\") pod \"c87cd6ce-730e-4107-8027-71b18ae4a0f7\" (UID: \"c87cd6ce-730e-4107-8027-71b18ae4a0f7\") " Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.416393 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c87cd6ce-730e-4107-8027-71b18ae4a0f7-dns-svc\") pod \"c87cd6ce-730e-4107-8027-71b18ae4a0f7\" (UID: 
\"c87cd6ce-730e-4107-8027-71b18ae4a0f7\") " Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.416514 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c87cd6ce-730e-4107-8027-71b18ae4a0f7-ovsdbserver-nb\") pod \"c87cd6ce-730e-4107-8027-71b18ae4a0f7\" (UID: \"c87cd6ce-730e-4107-8027-71b18ae4a0f7\") " Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.416546 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67rh9\" (UniqueName: \"kubernetes.io/projected/6c5c0388-6acf-443c-9db5-486defcdeacd-kube-api-access-67rh9\") pod \"6c5c0388-6acf-443c-9db5-486defcdeacd\" (UID: \"6c5c0388-6acf-443c-9db5-486defcdeacd\") " Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.418324 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c5c0388-6acf-443c-9db5-486defcdeacd-utilities" (OuterVolumeSpecName: "utilities") pod "6c5c0388-6acf-443c-9db5-486defcdeacd" (UID: "6c5c0388-6acf-443c-9db5-486defcdeacd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.428370 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c87cd6ce-730e-4107-8027-71b18ae4a0f7-kube-api-access-fppwx" (OuterVolumeSpecName: "kube-api-access-fppwx") pod "c87cd6ce-730e-4107-8027-71b18ae4a0f7" (UID: "c87cd6ce-730e-4107-8027-71b18ae4a0f7"). InnerVolumeSpecName "kube-api-access-fppwx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.444350 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c5c0388-6acf-443c-9db5-486defcdeacd-kube-api-access-67rh9" (OuterVolumeSpecName: "kube-api-access-67rh9") pod "6c5c0388-6acf-443c-9db5-486defcdeacd" (UID: "6c5c0388-6acf-443c-9db5-486defcdeacd"). InnerVolumeSpecName "kube-api-access-67rh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.470133 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c87cd6ce-730e-4107-8027-71b18ae4a0f7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c87cd6ce-730e-4107-8027-71b18ae4a0f7" (UID: "c87cd6ce-730e-4107-8027-71b18ae4a0f7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.494775 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c87cd6ce-730e-4107-8027-71b18ae4a0f7-config" (OuterVolumeSpecName: "config") pod "c87cd6ce-730e-4107-8027-71b18ae4a0f7" (UID: "c87cd6ce-730e-4107-8027-71b18ae4a0f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.494846 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c87cd6ce-730e-4107-8027-71b18ae4a0f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c87cd6ce-730e-4107-8027-71b18ae4a0f7" (UID: "c87cd6ce-730e-4107-8027-71b18ae4a0f7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.515979 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c5c0388-6acf-443c-9db5-486defcdeacd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c5c0388-6acf-443c-9db5-486defcdeacd" (UID: "6c5c0388-6acf-443c-9db5-486defcdeacd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.521705 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fppwx\" (UniqueName: \"kubernetes.io/projected/c87cd6ce-730e-4107-8027-71b18ae4a0f7-kube-api-access-fppwx\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.521938 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c5c0388-6acf-443c-9db5-486defcdeacd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.522014 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c87cd6ce-730e-4107-8027-71b18ae4a0f7-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.522168 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c87cd6ce-730e-4107-8027-71b18ae4a0f7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.522250 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c87cd6ce-730e-4107-8027-71b18ae4a0f7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.522323 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67rh9\" (UniqueName: 
\"kubernetes.io/projected/6c5c0388-6acf-443c-9db5-486defcdeacd-kube-api-access-67rh9\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.522392 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c5c0388-6acf-443c-9db5-486defcdeacd-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.557879 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-826wp"] Feb 16 21:12:52 crc kubenswrapper[4811]: E0216 21:12:52.559441 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c5c0388-6acf-443c-9db5-486defcdeacd" containerName="extract-utilities" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.559577 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c5c0388-6acf-443c-9db5-486defcdeacd" containerName="extract-utilities" Feb 16 21:12:52 crc kubenswrapper[4811]: E0216 21:12:52.560644 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c5c0388-6acf-443c-9db5-486defcdeacd" containerName="registry-server" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.560708 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c5c0388-6acf-443c-9db5-486defcdeacd" containerName="registry-server" Feb 16 21:12:52 crc kubenswrapper[4811]: E0216 21:12:52.560785 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c87cd6ce-730e-4107-8027-71b18ae4a0f7" containerName="init" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.560841 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c87cd6ce-730e-4107-8027-71b18ae4a0f7" containerName="init" Feb 16 21:12:52 crc kubenswrapper[4811]: E0216 21:12:52.560915 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c5c0388-6acf-443c-9db5-486defcdeacd" containerName="extract-content" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.560981 4811 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6c5c0388-6acf-443c-9db5-486defcdeacd" containerName="extract-content" Feb 16 21:12:52 crc kubenswrapper[4811]: E0216 21:12:52.561057 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c87cd6ce-730e-4107-8027-71b18ae4a0f7" containerName="dnsmasq-dns" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.561119 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c87cd6ce-730e-4107-8027-71b18ae4a0f7" containerName="dnsmasq-dns" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.561393 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="c87cd6ce-730e-4107-8027-71b18ae4a0f7" containerName="dnsmasq-dns" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.561483 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c5c0388-6acf-443c-9db5-486defcdeacd" containerName="registry-server" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.565012 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-826wp" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.573710 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.575363 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-826wp"] Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.623954 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x48dk\" (UniqueName: \"kubernetes.io/projected/6bf24fbe-b1bb-411b-b042-52ec9afefaec-kube-api-access-x48dk\") pod \"root-account-create-update-826wp\" (UID: \"6bf24fbe-b1bb-411b-b042-52ec9afefaec\") " pod="openstack/root-account-create-update-826wp" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.624006 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bf24fbe-b1bb-411b-b042-52ec9afefaec-operator-scripts\") pod \"root-account-create-update-826wp\" (UID: \"6bf24fbe-b1bb-411b-b042-52ec9afefaec\") " pod="openstack/root-account-create-update-826wp" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.725731 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x48dk\" (UniqueName: \"kubernetes.io/projected/6bf24fbe-b1bb-411b-b042-52ec9afefaec-kube-api-access-x48dk\") pod \"root-account-create-update-826wp\" (UID: \"6bf24fbe-b1bb-411b-b042-52ec9afefaec\") " pod="openstack/root-account-create-update-826wp" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.725768 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bf24fbe-b1bb-411b-b042-52ec9afefaec-operator-scripts\") pod \"root-account-create-update-826wp\" (UID: 
\"6bf24fbe-b1bb-411b-b042-52ec9afefaec\") " pod="openstack/root-account-create-update-826wp" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.726560 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bf24fbe-b1bb-411b-b042-52ec9afefaec-operator-scripts\") pod \"root-account-create-update-826wp\" (UID: \"6bf24fbe-b1bb-411b-b042-52ec9afefaec\") " pod="openstack/root-account-create-update-826wp" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.766245 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x48dk\" (UniqueName: \"kubernetes.io/projected/6bf24fbe-b1bb-411b-b042-52ec9afefaec-kube-api-access-x48dk\") pod \"root-account-create-update-826wp\" (UID: \"6bf24fbe-b1bb-411b-b042-52ec9afefaec\") " pod="openstack/root-account-create-update-826wp" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.830426 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4247055a-8ca2-4a03-9a3a-d582d674b38a","Type":"ContainerStarted","Data":"93b124e7caf16e25118f2236123f3af54ed98788aec76345c4753f01db043fdf"} Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.832250 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-zcd6z" event={"ID":"c87cd6ce-730e-4107-8027-71b18ae4a0f7","Type":"ContainerDied","Data":"b34dc31dd6033ae84e2a944eb25e12c2d30530a3983fa5dc0693850eef7fc0e4"} Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.832283 4811 scope.go:117] "RemoveContainer" containerID="654e415d708ad8e6609bbcacc0d66ed4cd3d39ad6206eb8917fa9dcbee824631" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.832396 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-zcd6z" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.847735 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cd541633-15e7-4a12-99a4-72637521386d","Type":"ContainerStarted","Data":"c624a48c3b1de4601285d9cea4b856ff43606a23ed7210c487dbac68ae25da1a"} Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.848264 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.850858 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"40263486-d6cd-4aa0-9570-affea970096f","Type":"ContainerStarted","Data":"46f025c4f64cf63979a563a448d088ae4730f94b9c0f08de2ed46121786af5c3"} Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.853607 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.879648 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s8hk9" event={"ID":"6c5c0388-6acf-443c-9db5-486defcdeacd","Type":"ContainerDied","Data":"374b941827a4b5206a9fd1a58b222577e89cda91e65cb1f867326a1599ef6c3f"} Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.880018 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s8hk9" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.890439 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-826wp" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.902024 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371973.952774 podStartE2EDuration="1m2.902002282s" podCreationTimestamp="2026-02-16 21:11:50 +0000 UTC" firstStartedPulling="2026-02-16 21:11:53.153096155 +0000 UTC m=+931.082392103" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:12:52.888382209 +0000 UTC m=+990.817678167" watchObservedRunningTime="2026-02-16 21:12:52.902002282 +0000 UTC m=+990.831298220" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.909884 4811 scope.go:117] "RemoveContainer" containerID="d00f3a6b0041d6ddb54a5b85d8c50cb1344c2a001fb0ed8183cb25f9c53d6a9a" Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.933620 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-zcd6z"] Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.944425 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-zcd6z"] Feb 16 21:12:52 crc kubenswrapper[4811]: I0216 21:12:52.996029 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6610-account-create-update-brzlq"] Feb 16 21:12:53 crc kubenswrapper[4811]: I0216 21:12:53.013902 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mstfh"] Feb 16 21:12:53 crc kubenswrapper[4811]: I0216 21:12:53.022444 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=47.465983499000004 podStartE2EDuration="1m2.02242488s" podCreationTimestamp="2026-02-16 21:11:51 +0000 UTC" firstStartedPulling="2026-02-16 21:11:58.800955606 +0000 UTC m=+936.730251554" lastFinishedPulling="2026-02-16 21:12:13.357396997 +0000 UTC m=+951.286692935" 
observedRunningTime="2026-02-16 21:12:52.988874486 +0000 UTC m=+990.918170424" watchObservedRunningTime="2026-02-16 21:12:53.02242488 +0000 UTC m=+990.951720818" Feb 16 21:12:53 crc kubenswrapper[4811]: I0216 21:12:53.052051 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s8hk9"] Feb 16 21:12:53 crc kubenswrapper[4811]: I0216 21:12:53.056241 4811 scope.go:117] "RemoveContainer" containerID="97b41f6e05256b35e8a212c24d609dd7050d44035be3cdfc3bf6f70866dc16f8" Feb 16 21:12:53 crc kubenswrapper[4811]: I0216 21:12:53.058717 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s8hk9"] Feb 16 21:12:53 crc kubenswrapper[4811]: I0216 21:12:53.074300 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-7dnxf"] Feb 16 21:12:53 crc kubenswrapper[4811]: I0216 21:12:53.117682 4811 scope.go:117] "RemoveContainer" containerID="eeb95fd63d07343de9a89cc7212a8b33d0bad90532c928e081f248fb7a360aa0" Feb 16 21:12:53 crc kubenswrapper[4811]: I0216 21:12:53.179381 4811 scope.go:117] "RemoveContainer" containerID="addf7e45d8451283d1051a8552ffc90e5ba53ea4ad6ac28637e763d06b8f4995" Feb 16 21:12:53 crc kubenswrapper[4811]: I0216 21:12:53.230856 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-lmvpb"] Feb 16 21:12:53 crc kubenswrapper[4811]: I0216 21:12:53.264473 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4b4c-account-create-update-q54gf"] Feb 16 21:12:53 crc kubenswrapper[4811]: I0216 21:12:53.275168 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-cdcch"] Feb 16 21:12:53 crc kubenswrapper[4811]: W0216 21:12:53.319709 4811 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bf21953_4d87_4a23_a09a_454e12365b71.slice/crio-83d82787994c6cf6bf7c982adf0d2518114372464a968544ce10f3dda5769ada WatchSource:0}: Error finding container 83d82787994c6cf6bf7c982adf0d2518114372464a968544ce10f3dda5769ada: Status 404 returned error can't find the container with id 83d82787994c6cf6bf7c982adf0d2518114372464a968544ce10f3dda5769ada Feb 16 21:12:53 crc kubenswrapper[4811]: I0216 21:12:53.327269 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1afa-account-create-update-nl8r4"] Feb 16 21:12:53 crc kubenswrapper[4811]: I0216 21:12:53.332137 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-xw259"] Feb 16 21:12:53 crc kubenswrapper[4811]: I0216 21:12:53.543727 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-826wp"] Feb 16 21:12:53 crc kubenswrapper[4811]: I0216 21:12:53.939468 4811 generic.go:334] "Generic (PLEG): container finished" podID="4bf21953-4d87-4a23-a09a-454e12365b71" containerID="95ba0b6409f9c8172829d1bc05fb07c77db0e8506218f13fccc9511b57027a1f" exitCode=0 Feb 16 21:12:53 crc kubenswrapper[4811]: I0216 21:12:53.939813 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" event={"ID":"4bf21953-4d87-4a23-a09a-454e12365b71","Type":"ContainerDied","Data":"95ba0b6409f9c8172829d1bc05fb07c77db0e8506218f13fccc9511b57027a1f"} Feb 16 21:12:53 crc kubenswrapper[4811]: I0216 21:12:53.939849 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" event={"ID":"4bf21953-4d87-4a23-a09a-454e12365b71","Type":"ContainerStarted","Data":"83d82787994c6cf6bf7c982adf0d2518114372464a968544ce10f3dda5769ada"} Feb 16 21:12:53 crc kubenswrapper[4811]: I0216 21:12:53.946667 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xw259" 
event={"ID":"6922a5b7-d2e7-489e-b42d-1a54a1d85b6a","Type":"ContainerStarted","Data":"93ac4d3a1a719889246ce0c2033ac120f7e1f67b937f2537ddeac795e2776292"} Feb 16 21:12:53 crc kubenswrapper[4811]: I0216 21:12:53.946717 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xw259" event={"ID":"6922a5b7-d2e7-489e-b42d-1a54a1d85b6a","Type":"ContainerStarted","Data":"3146bdc4375c1e1cab94f02721e64c69baa1bc27f8b1e74fa88f13059e602ffe"} Feb 16 21:12:53 crc kubenswrapper[4811]: I0216 21:12:53.952330 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4b4c-account-create-update-q54gf" event={"ID":"4aed10ff-a730-4ac8-88c7-395a71b9554b","Type":"ContainerStarted","Data":"0860ccb55cb4e3e86e372762f333889ff97fa5a8f79dbd6d082586a5f571aaa8"} Feb 16 21:12:53 crc kubenswrapper[4811]: I0216 21:12:53.971143 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7dnxf" event={"ID":"8b6c7641-19e7-4831-82d4-8eda499301b7","Type":"ContainerStarted","Data":"1c8c9ded0fa3c4f14b181f7145434d2cfad7933bf61715d30d1babc04da74195"} Feb 16 21:12:53 crc kubenswrapper[4811]: I0216 21:12:53.983162 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-826wp" event={"ID":"6bf24fbe-b1bb-411b-b042-52ec9afefaec","Type":"ContainerStarted","Data":"a5aaca212ca7b3eef80a2dd7556d454ff97f908f7597b97551b944af07e70e9f"} Feb 16 21:12:54 crc kubenswrapper[4811]: I0216 21:12:54.000705 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lmvpb" event={"ID":"15286204-6ffc-4f13-aacb-8c231edf893d","Type":"ContainerStarted","Data":"907b0e27480d39716fa5f5dc2e9c5df4058467e22407359b2784de0802139c93"} Feb 16 21:12:54 crc kubenswrapper[4811]: I0216 21:12:54.000743 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lmvpb" 
event={"ID":"15286204-6ffc-4f13-aacb-8c231edf893d","Type":"ContainerStarted","Data":"5281d66f047cceee164efd5796f1b82d9f4553c94e6354ba00f8a45128e055cc"} Feb 16 21:12:54 crc kubenswrapper[4811]: I0216 21:12:54.010013 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mstfh" event={"ID":"48b8148b-cf17-4592-8583-edb4ccedca18","Type":"ContainerStarted","Data":"4c980ac24f5fc3d27966e6bbc6d0dd015591904629ee92cb6128a9162992dc2d"} Feb 16 21:12:54 crc kubenswrapper[4811]: I0216 21:12:54.010049 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mstfh" event={"ID":"48b8148b-cf17-4592-8583-edb4ccedca18","Type":"ContainerStarted","Data":"691c23c4298e9694381caa04113d6f74e8919e9cee0b344e6e563c6ddb65d907"} Feb 16 21:12:54 crc kubenswrapper[4811]: I0216 21:12:54.022764 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1afa-account-create-update-nl8r4" event={"ID":"f556f9d0-3444-46b3-b435-dcf08cf76c0c","Type":"ContainerStarted","Data":"df662400c2e40b8027531a26680ff814da81b7bf4e1f521bfb6f9f1431f6d680"} Feb 16 21:12:54 crc kubenswrapper[4811]: I0216 21:12:54.024039 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-4b4c-account-create-update-q54gf" podStartSLOduration=4.024024821 podStartE2EDuration="4.024024821s" podCreationTimestamp="2026-02-16 21:12:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:12:53.998887048 +0000 UTC m=+991.928182986" watchObservedRunningTime="2026-02-16 21:12:54.024024821 +0000 UTC m=+991.953320759" Feb 16 21:12:54 crc kubenswrapper[4811]: I0216 21:12:54.027356 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6610-account-create-update-brzlq" 
event={"ID":"eebc5893-8007-4da8-8e04-9c54d1a7b57c","Type":"ContainerStarted","Data":"c6703f506da4fd7d33b0d0ca7af956e4969080dd52a4459be99d7b955ba9303a"} Feb 16 21:12:54 crc kubenswrapper[4811]: I0216 21:12:54.030255 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6610-account-create-update-brzlq" event={"ID":"eebc5893-8007-4da8-8e04-9c54d1a7b57c","Type":"ContainerStarted","Data":"6bf1793a33003c2d93fdfed6272f38a04cbda28d83f3c5b4a8c36c6366b79960"} Feb 16 21:12:54 crc kubenswrapper[4811]: I0216 21:12:54.062419 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-xw259" podStartSLOduration=4.062400306 podStartE2EDuration="4.062400306s" podCreationTimestamp="2026-02-16 21:12:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:12:54.020734728 +0000 UTC m=+991.950030666" watchObservedRunningTime="2026-02-16 21:12:54.062400306 +0000 UTC m=+991.991696254" Feb 16 21:12:54 crc kubenswrapper[4811]: I0216 21:12:54.080904 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-mstfh" podStartSLOduration=8.08088496 podStartE2EDuration="8.08088496s" podCreationTimestamp="2026-02-16 21:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:12:54.044001473 +0000 UTC m=+991.973297411" watchObservedRunningTime="2026-02-16 21:12:54.08088496 +0000 UTC m=+992.010180898" Feb 16 21:12:54 crc kubenswrapper[4811]: I0216 21:12:54.095935 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-lmvpb" podStartSLOduration=8.095910868 podStartE2EDuration="8.095910868s" podCreationTimestamp="2026-02-16 21:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-16 21:12:54.065513384 +0000 UTC m=+991.994809322" watchObservedRunningTime="2026-02-16 21:12:54.095910868 +0000 UTC m=+992.025206846" Feb 16 21:12:54 crc kubenswrapper[4811]: I0216 21:12:54.116083 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6610-account-create-update-brzlq" podStartSLOduration=8.116066085 podStartE2EDuration="8.116066085s" podCreationTimestamp="2026-02-16 21:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:12:54.082263845 +0000 UTC m=+992.011559793" watchObservedRunningTime="2026-02-16 21:12:54.116066085 +0000 UTC m=+992.045362023" Feb 16 21:12:54 crc kubenswrapper[4811]: I0216 21:12:54.126169 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-1afa-account-create-update-nl8r4" podStartSLOduration=8.126152639 podStartE2EDuration="8.126152639s" podCreationTimestamp="2026-02-16 21:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:12:54.101734175 +0000 UTC m=+992.031030113" watchObservedRunningTime="2026-02-16 21:12:54.126152639 +0000 UTC m=+992.055448577" Feb 16 21:12:54 crc kubenswrapper[4811]: I0216 21:12:54.715605 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c5c0388-6acf-443c-9db5-486defcdeacd" path="/var/lib/kubelet/pods/6c5c0388-6acf-443c-9db5-486defcdeacd/volumes" Feb 16 21:12:54 crc kubenswrapper[4811]: I0216 21:12:54.718541 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c87cd6ce-730e-4107-8027-71b18ae4a0f7" path="/var/lib/kubelet/pods/c87cd6ce-730e-4107-8027-71b18ae4a0f7/volumes" Feb 16 21:12:55 crc kubenswrapper[4811]: I0216 21:12:55.035553 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" 
event={"ID":"4bf21953-4d87-4a23-a09a-454e12365b71","Type":"ContainerStarted","Data":"d6aa80d51a3632fb715d82dafa67b03dd33fad62e372441c06b7cbafa75a6360"} Feb 16 21:12:55 crc kubenswrapper[4811]: I0216 21:12:55.035858 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" Feb 16 21:12:55 crc kubenswrapper[4811]: I0216 21:12:55.039859 4811 generic.go:334] "Generic (PLEG): container finished" podID="48b8148b-cf17-4592-8583-edb4ccedca18" containerID="4c980ac24f5fc3d27966e6bbc6d0dd015591904629ee92cb6128a9162992dc2d" exitCode=0 Feb 16 21:12:55 crc kubenswrapper[4811]: I0216 21:12:55.039946 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mstfh" event={"ID":"48b8148b-cf17-4592-8583-edb4ccedca18","Type":"ContainerDied","Data":"4c980ac24f5fc3d27966e6bbc6d0dd015591904629ee92cb6128a9162992dc2d"} Feb 16 21:12:55 crc kubenswrapper[4811]: I0216 21:12:55.043527 4811 generic.go:334] "Generic (PLEG): container finished" podID="6922a5b7-d2e7-489e-b42d-1a54a1d85b6a" containerID="93ac4d3a1a719889246ce0c2033ac120f7e1f67b937f2537ddeac795e2776292" exitCode=0 Feb 16 21:12:55 crc kubenswrapper[4811]: I0216 21:12:55.043586 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xw259" event={"ID":"6922a5b7-d2e7-489e-b42d-1a54a1d85b6a","Type":"ContainerDied","Data":"93ac4d3a1a719889246ce0c2033ac120f7e1f67b937f2537ddeac795e2776292"} Feb 16 21:12:55 crc kubenswrapper[4811]: I0216 21:12:55.046654 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4247055a-8ca2-4a03-9a3a-d582d674b38a","Type":"ContainerStarted","Data":"96c3b6637bc5d3056022600925d1249a04f728a2ee5378fa24e7c38a7ed2164c"} Feb 16 21:12:55 crc kubenswrapper[4811]: I0216 21:12:55.054054 4811 generic.go:334] "Generic (PLEG): container finished" podID="eebc5893-8007-4da8-8e04-9c54d1a7b57c" 
containerID="c6703f506da4fd7d33b0d0ca7af956e4969080dd52a4459be99d7b955ba9303a" exitCode=0 Feb 16 21:12:55 crc kubenswrapper[4811]: I0216 21:12:55.054233 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6610-account-create-update-brzlq" event={"ID":"eebc5893-8007-4da8-8e04-9c54d1a7b57c","Type":"ContainerDied","Data":"c6703f506da4fd7d33b0d0ca7af956e4969080dd52a4459be99d7b955ba9303a"} Feb 16 21:12:55 crc kubenswrapper[4811]: I0216 21:12:55.062535 4811 generic.go:334] "Generic (PLEG): container finished" podID="4aed10ff-a730-4ac8-88c7-395a71b9554b" containerID="a432017193da7461ce95f2529a6311bca58a7a6b5b77768578d5fb55f3c5b094" exitCode=0 Feb 16 21:12:55 crc kubenswrapper[4811]: I0216 21:12:55.062631 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4b4c-account-create-update-q54gf" event={"ID":"4aed10ff-a730-4ac8-88c7-395a71b9554b","Type":"ContainerDied","Data":"a432017193da7461ce95f2529a6311bca58a7a6b5b77768578d5fb55f3c5b094"} Feb 16 21:12:55 crc kubenswrapper[4811]: I0216 21:12:55.067833 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" podStartSLOduration=8.067815272 podStartE2EDuration="8.067815272s" podCreationTimestamp="2026-02-16 21:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:12:55.060813456 +0000 UTC m=+992.990109404" watchObservedRunningTime="2026-02-16 21:12:55.067815272 +0000 UTC m=+992.997111230" Feb 16 21:12:55 crc kubenswrapper[4811]: I0216 21:12:55.068049 4811 generic.go:334] "Generic (PLEG): container finished" podID="f556f9d0-3444-46b3-b435-dcf08cf76c0c" containerID="ff7537c2fb0ff2776acdbdf93f70c59ac61b4b53f56fd8d6944fc435ac925e5c" exitCode=0 Feb 16 21:12:55 crc kubenswrapper[4811]: I0216 21:12:55.068156 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1afa-account-create-update-nl8r4" 
event={"ID":"f556f9d0-3444-46b3-b435-dcf08cf76c0c","Type":"ContainerDied","Data":"ff7537c2fb0ff2776acdbdf93f70c59ac61b4b53f56fd8d6944fc435ac925e5c"} Feb 16 21:12:55 crc kubenswrapper[4811]: I0216 21:12:55.073463 4811 generic.go:334] "Generic (PLEG): container finished" podID="6bf24fbe-b1bb-411b-b042-52ec9afefaec" containerID="1e012c4ca23120a3cf3ac1134d9b440249bbfd71c2eb5c54c03ebb045a776dd0" exitCode=0 Feb 16 21:12:55 crc kubenswrapper[4811]: I0216 21:12:55.073544 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-826wp" event={"ID":"6bf24fbe-b1bb-411b-b042-52ec9afefaec","Type":"ContainerDied","Data":"1e012c4ca23120a3cf3ac1134d9b440249bbfd71c2eb5c54c03ebb045a776dd0"} Feb 16 21:12:55 crc kubenswrapper[4811]: I0216 21:12:55.088850 4811 generic.go:334] "Generic (PLEG): container finished" podID="15286204-6ffc-4f13-aacb-8c231edf893d" containerID="907b0e27480d39716fa5f5dc2e9c5df4058467e22407359b2784de0802139c93" exitCode=0 Feb 16 21:12:55 crc kubenswrapper[4811]: I0216 21:12:55.088916 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lmvpb" event={"ID":"15286204-6ffc-4f13-aacb-8c231edf893d","Type":"ContainerDied","Data":"907b0e27480d39716fa5f5dc2e9c5df4058467e22407359b2784de0802139c93"} Feb 16 21:12:56 crc kubenswrapper[4811]: I0216 21:12:56.257321 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3590443c-c5fd-4eec-a144-06cddd956651-etc-swift\") pod \"swift-storage-0\" (UID: \"3590443c-c5fd-4eec-a144-06cddd956651\") " pod="openstack/swift-storage-0" Feb 16 21:12:56 crc kubenswrapper[4811]: E0216 21:12:56.257586 4811 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 21:12:56 crc kubenswrapper[4811]: E0216 21:12:56.257605 4811 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap 
"swift-ring-files" not found Feb 16 21:12:56 crc kubenswrapper[4811]: E0216 21:12:56.257654 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3590443c-c5fd-4eec-a144-06cddd956651-etc-swift podName:3590443c-c5fd-4eec-a144-06cddd956651 nodeName:}" failed. No retries permitted until 2026-02-16 21:13:04.257637425 +0000 UTC m=+1002.186933363 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3590443c-c5fd-4eec-a144-06cddd956651-etc-swift") pod "swift-storage-0" (UID: "3590443c-c5fd-4eec-a144-06cddd956651") : configmap "swift-ring-files" not found Feb 16 21:12:57 crc kubenswrapper[4811]: I0216 21:12:57.931135 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-lmvpb" Feb 16 21:12:57 crc kubenswrapper[4811]: I0216 21:12:57.934521 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1afa-account-create-update-nl8r4" Feb 16 21:12:57 crc kubenswrapper[4811]: I0216 21:12:57.957618 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4b4c-account-create-update-q54gf" Feb 16 21:12:57 crc kubenswrapper[4811]: I0216 21:12:57.968995 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6610-account-create-update-brzlq" Feb 16 21:12:57 crc kubenswrapper[4811]: I0216 21:12:57.969162 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mstfh" Feb 16 21:12:57 crc kubenswrapper[4811]: I0216 21:12:57.978673 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-826wp" Feb 16 21:12:57 crc kubenswrapper[4811]: I0216 21:12:57.993981 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzddt\" (UniqueName: \"kubernetes.io/projected/4aed10ff-a730-4ac8-88c7-395a71b9554b-kube-api-access-wzddt\") pod \"4aed10ff-a730-4ac8-88c7-395a71b9554b\" (UID: \"4aed10ff-a730-4ac8-88c7-395a71b9554b\") " Feb 16 21:12:57 crc kubenswrapper[4811]: I0216 21:12:57.994067 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdxzk\" (UniqueName: \"kubernetes.io/projected/48b8148b-cf17-4592-8583-edb4ccedca18-kube-api-access-wdxzk\") pod \"48b8148b-cf17-4592-8583-edb4ccedca18\" (UID: \"48b8148b-cf17-4592-8583-edb4ccedca18\") " Feb 16 21:12:57 crc kubenswrapper[4811]: I0216 21:12:57.994100 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rrzx\" (UniqueName: \"kubernetes.io/projected/15286204-6ffc-4f13-aacb-8c231edf893d-kube-api-access-7rrzx\") pod \"15286204-6ffc-4f13-aacb-8c231edf893d\" (UID: \"15286204-6ffc-4f13-aacb-8c231edf893d\") " Feb 16 21:12:57 crc kubenswrapper[4811]: I0216 21:12:57.994118 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aed10ff-a730-4ac8-88c7-395a71b9554b-operator-scripts\") pod \"4aed10ff-a730-4ac8-88c7-395a71b9554b\" (UID: \"4aed10ff-a730-4ac8-88c7-395a71b9554b\") " Feb 16 21:12:57 crc kubenswrapper[4811]: I0216 21:12:57.994146 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x48dk\" (UniqueName: \"kubernetes.io/projected/6bf24fbe-b1bb-411b-b042-52ec9afefaec-kube-api-access-x48dk\") pod \"6bf24fbe-b1bb-411b-b042-52ec9afefaec\" (UID: \"6bf24fbe-b1bb-411b-b042-52ec9afefaec\") " Feb 16 21:12:57 crc kubenswrapper[4811]: I0216 21:12:57.994239 4811 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-2ffsg\" (UniqueName: \"kubernetes.io/projected/f556f9d0-3444-46b3-b435-dcf08cf76c0c-kube-api-access-2ffsg\") pod \"f556f9d0-3444-46b3-b435-dcf08cf76c0c\" (UID: \"f556f9d0-3444-46b3-b435-dcf08cf76c0c\") " Feb 16 21:12:57 crc kubenswrapper[4811]: I0216 21:12:57.994255 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15286204-6ffc-4f13-aacb-8c231edf893d-operator-scripts\") pod \"15286204-6ffc-4f13-aacb-8c231edf893d\" (UID: \"15286204-6ffc-4f13-aacb-8c231edf893d\") " Feb 16 21:12:57 crc kubenswrapper[4811]: I0216 21:12:57.994270 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eebc5893-8007-4da8-8e04-9c54d1a7b57c-operator-scripts\") pod \"eebc5893-8007-4da8-8e04-9c54d1a7b57c\" (UID: \"eebc5893-8007-4da8-8e04-9c54d1a7b57c\") " Feb 16 21:12:57 crc kubenswrapper[4811]: I0216 21:12:57.994297 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f556f9d0-3444-46b3-b435-dcf08cf76c0c-operator-scripts\") pod \"f556f9d0-3444-46b3-b435-dcf08cf76c0c\" (UID: \"f556f9d0-3444-46b3-b435-dcf08cf76c0c\") " Feb 16 21:12:57 crc kubenswrapper[4811]: I0216 21:12:57.994324 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bf24fbe-b1bb-411b-b042-52ec9afefaec-operator-scripts\") pod \"6bf24fbe-b1bb-411b-b042-52ec9afefaec\" (UID: \"6bf24fbe-b1bb-411b-b042-52ec9afefaec\") " Feb 16 21:12:57 crc kubenswrapper[4811]: I0216 21:12:57.994388 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48b8148b-cf17-4592-8583-edb4ccedca18-operator-scripts\") pod 
\"48b8148b-cf17-4592-8583-edb4ccedca18\" (UID: \"48b8148b-cf17-4592-8583-edb4ccedca18\") " Feb 16 21:12:57 crc kubenswrapper[4811]: I0216 21:12:57.994458 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26mg2\" (UniqueName: \"kubernetes.io/projected/eebc5893-8007-4da8-8e04-9c54d1a7b57c-kube-api-access-26mg2\") pod \"eebc5893-8007-4da8-8e04-9c54d1a7b57c\" (UID: \"eebc5893-8007-4da8-8e04-9c54d1a7b57c\") " Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.011859 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f556f9d0-3444-46b3-b435-dcf08cf76c0c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f556f9d0-3444-46b3-b435-dcf08cf76c0c" (UID: "f556f9d0-3444-46b3-b435-dcf08cf76c0c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.012545 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15286204-6ffc-4f13-aacb-8c231edf893d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15286204-6ffc-4f13-aacb-8c231edf893d" (UID: "15286204-6ffc-4f13-aacb-8c231edf893d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.012929 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eebc5893-8007-4da8-8e04-9c54d1a7b57c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eebc5893-8007-4da8-8e04-9c54d1a7b57c" (UID: "eebc5893-8007-4da8-8e04-9c54d1a7b57c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.013665 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4aed10ff-a730-4ac8-88c7-395a71b9554b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4aed10ff-a730-4ac8-88c7-395a71b9554b" (UID: "4aed10ff-a730-4ac8-88c7-395a71b9554b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.016860 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f556f9d0-3444-46b3-b435-dcf08cf76c0c-kube-api-access-2ffsg" (OuterVolumeSpecName: "kube-api-access-2ffsg") pod "f556f9d0-3444-46b3-b435-dcf08cf76c0c" (UID: "f556f9d0-3444-46b3-b435-dcf08cf76c0c"). InnerVolumeSpecName "kube-api-access-2ffsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.017722 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eebc5893-8007-4da8-8e04-9c54d1a7b57c-kube-api-access-26mg2" (OuterVolumeSpecName: "kube-api-access-26mg2") pod "eebc5893-8007-4da8-8e04-9c54d1a7b57c" (UID: "eebc5893-8007-4da8-8e04-9c54d1a7b57c"). InnerVolumeSpecName "kube-api-access-26mg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.019892 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bf24fbe-b1bb-411b-b042-52ec9afefaec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6bf24fbe-b1bb-411b-b042-52ec9afefaec" (UID: "6bf24fbe-b1bb-411b-b042-52ec9afefaec"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.021227 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48b8148b-cf17-4592-8583-edb4ccedca18-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48b8148b-cf17-4592-8583-edb4ccedca18" (UID: "48b8148b-cf17-4592-8583-edb4ccedca18"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.024444 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bf24fbe-b1bb-411b-b042-52ec9afefaec-kube-api-access-x48dk" (OuterVolumeSpecName: "kube-api-access-x48dk") pod "6bf24fbe-b1bb-411b-b042-52ec9afefaec" (UID: "6bf24fbe-b1bb-411b-b042-52ec9afefaec"). InnerVolumeSpecName "kube-api-access-x48dk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.030037 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aed10ff-a730-4ac8-88c7-395a71b9554b-kube-api-access-wzddt" (OuterVolumeSpecName: "kube-api-access-wzddt") pod "4aed10ff-a730-4ac8-88c7-395a71b9554b" (UID: "4aed10ff-a730-4ac8-88c7-395a71b9554b"). InnerVolumeSpecName "kube-api-access-wzddt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.043575 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48b8148b-cf17-4592-8583-edb4ccedca18-kube-api-access-wdxzk" (OuterVolumeSpecName: "kube-api-access-wdxzk") pod "48b8148b-cf17-4592-8583-edb4ccedca18" (UID: "48b8148b-cf17-4592-8583-edb4ccedca18"). InnerVolumeSpecName "kube-api-access-wdxzk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.045418 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15286204-6ffc-4f13-aacb-8c231edf893d-kube-api-access-7rrzx" (OuterVolumeSpecName: "kube-api-access-7rrzx") pod "15286204-6ffc-4f13-aacb-8c231edf893d" (UID: "15286204-6ffc-4f13-aacb-8c231edf893d"). InnerVolumeSpecName "kube-api-access-7rrzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.085303 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-xw259" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.095496 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dslq2\" (UniqueName: \"kubernetes.io/projected/6922a5b7-d2e7-489e-b42d-1a54a1d85b6a-kube-api-access-dslq2\") pod \"6922a5b7-d2e7-489e-b42d-1a54a1d85b6a\" (UID: \"6922a5b7-d2e7-489e-b42d-1a54a1d85b6a\") " Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.095546 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6922a5b7-d2e7-489e-b42d-1a54a1d85b6a-operator-scripts\") pod \"6922a5b7-d2e7-489e-b42d-1a54a1d85b6a\" (UID: \"6922a5b7-d2e7-489e-b42d-1a54a1d85b6a\") " Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.095861 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ffsg\" (UniqueName: \"kubernetes.io/projected/f556f9d0-3444-46b3-b435-dcf08cf76c0c-kube-api-access-2ffsg\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.095872 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15286204-6ffc-4f13-aacb-8c231edf893d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:58 
crc kubenswrapper[4811]: I0216 21:12:58.095882 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eebc5893-8007-4da8-8e04-9c54d1a7b57c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.095891 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f556f9d0-3444-46b3-b435-dcf08cf76c0c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.095899 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bf24fbe-b1bb-411b-b042-52ec9afefaec-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.095908 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48b8148b-cf17-4592-8583-edb4ccedca18-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.095916 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26mg2\" (UniqueName: \"kubernetes.io/projected/eebc5893-8007-4da8-8e04-9c54d1a7b57c-kube-api-access-26mg2\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.095926 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzddt\" (UniqueName: \"kubernetes.io/projected/4aed10ff-a730-4ac8-88c7-395a71b9554b-kube-api-access-wzddt\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.095934 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdxzk\" (UniqueName: \"kubernetes.io/projected/48b8148b-cf17-4592-8583-edb4ccedca18-kube-api-access-wdxzk\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.095942 4811 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rrzx\" (UniqueName: \"kubernetes.io/projected/15286204-6ffc-4f13-aacb-8c231edf893d-kube-api-access-7rrzx\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.095952 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aed10ff-a730-4ac8-88c7-395a71b9554b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.095971 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x48dk\" (UniqueName: \"kubernetes.io/projected/6bf24fbe-b1bb-411b-b042-52ec9afefaec-kube-api-access-x48dk\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.107676 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6922a5b7-d2e7-489e-b42d-1a54a1d85b6a-kube-api-access-dslq2" (OuterVolumeSpecName: "kube-api-access-dslq2") pod "6922a5b7-d2e7-489e-b42d-1a54a1d85b6a" (UID: "6922a5b7-d2e7-489e-b42d-1a54a1d85b6a"). InnerVolumeSpecName "kube-api-access-dslq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.111597 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6922a5b7-d2e7-489e-b42d-1a54a1d85b6a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6922a5b7-d2e7-489e-b42d-1a54a1d85b6a" (UID: "6922a5b7-d2e7-489e-b42d-1a54a1d85b6a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.162599 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-826wp" event={"ID":"6bf24fbe-b1bb-411b-b042-52ec9afefaec","Type":"ContainerDied","Data":"a5aaca212ca7b3eef80a2dd7556d454ff97f908f7597b97551b944af07e70e9f"} Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.162637 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5aaca212ca7b3eef80a2dd7556d454ff97f908f7597b97551b944af07e70e9f" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.162765 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-826wp" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.171915 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lmvpb" event={"ID":"15286204-6ffc-4f13-aacb-8c231edf893d","Type":"ContainerDied","Data":"5281d66f047cceee164efd5796f1b82d9f4553c94e6354ba00f8a45128e055cc"} Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.171950 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5281d66f047cceee164efd5796f1b82d9f4553c94e6354ba00f8a45128e055cc" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.172011 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-lmvpb" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.182472 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mstfh" event={"ID":"48b8148b-cf17-4592-8583-edb4ccedca18","Type":"ContainerDied","Data":"691c23c4298e9694381caa04113d6f74e8919e9cee0b344e6e563c6ddb65d907"} Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.182507 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="691c23c4298e9694381caa04113d6f74e8919e9cee0b344e6e563c6ddb65d907" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.182478 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mstfh" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.193447 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xw259" event={"ID":"6922a5b7-d2e7-489e-b42d-1a54a1d85b6a","Type":"ContainerDied","Data":"3146bdc4375c1e1cab94f02721e64c69baa1bc27f8b1e74fa88f13059e602ffe"} Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.193485 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3146bdc4375c1e1cab94f02721e64c69baa1bc27f8b1e74fa88f13059e602ffe" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.193531 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-xw259" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.195858 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1afa-account-create-update-nl8r4" event={"ID":"f556f9d0-3444-46b3-b435-dcf08cf76c0c","Type":"ContainerDied","Data":"df662400c2e40b8027531a26680ff814da81b7bf4e1f521bfb6f9f1431f6d680"} Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.195894 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df662400c2e40b8027531a26680ff814da81b7bf4e1f521bfb6f9f1431f6d680" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.195948 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1afa-account-create-update-nl8r4" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.196987 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dslq2\" (UniqueName: \"kubernetes.io/projected/6922a5b7-d2e7-489e-b42d-1a54a1d85b6a-kube-api-access-dslq2\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.197015 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6922a5b7-d2e7-489e-b42d-1a54a1d85b6a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.198062 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6610-account-create-update-brzlq" event={"ID":"eebc5893-8007-4da8-8e04-9c54d1a7b57c","Type":"ContainerDied","Data":"6bf1793a33003c2d93fdfed6272f38a04cbda28d83f3c5b4a8c36c6366b79960"} Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.198082 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6610-account-create-update-brzlq" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.198092 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bf1793a33003c2d93fdfed6272f38a04cbda28d83f3c5b4a8c36c6366b79960" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.199528 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4b4c-account-create-update-q54gf" event={"ID":"4aed10ff-a730-4ac8-88c7-395a71b9554b","Type":"ContainerDied","Data":"0860ccb55cb4e3e86e372762f333889ff97fa5a8f79dbd6d082586a5f571aaa8"} Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.199547 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0860ccb55cb4e3e86e372762f333889ff97fa5a8f79dbd6d082586a5f571aaa8" Feb 16 21:12:58 crc kubenswrapper[4811]: I0216 21:12:58.199661 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4b4c-account-create-update-q54gf" Feb 16 21:12:59 crc kubenswrapper[4811]: I0216 21:12:59.214542 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7dnxf" event={"ID":"8b6c7641-19e7-4831-82d4-8eda499301b7","Type":"ContainerStarted","Data":"c2a5443e727bdc5e45f338c7da8f12c64e1a2394687bb7e7b5e35a09d39f1691"} Feb 16 21:12:59 crc kubenswrapper[4811]: I0216 21:12:59.232928 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-7dnxf" podStartSLOduration=6.539128225 podStartE2EDuration="11.232909353s" podCreationTimestamp="2026-02-16 21:12:48 +0000 UTC" firstStartedPulling="2026-02-16 21:12:53.065406591 +0000 UTC m=+990.994702529" lastFinishedPulling="2026-02-16 21:12:57.759187719 +0000 UTC m=+995.688483657" observedRunningTime="2026-02-16 21:12:59.230840241 +0000 UTC m=+997.160136179" watchObservedRunningTime="2026-02-16 21:12:59.232909353 +0000 UTC m=+997.162205291" Feb 16 21:13:00 crc 
kubenswrapper[4811]: I0216 21:13:00.234415 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.721675 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-fx82t"] Feb 16 21:13:00 crc kubenswrapper[4811]: E0216 21:13:00.722057 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48b8148b-cf17-4592-8583-edb4ccedca18" containerName="mariadb-database-create" Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.722075 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b8148b-cf17-4592-8583-edb4ccedca18" containerName="mariadb-database-create" Feb 16 21:13:00 crc kubenswrapper[4811]: E0216 21:13:00.722099 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6922a5b7-d2e7-489e-b42d-1a54a1d85b6a" containerName="mariadb-database-create" Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.722105 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6922a5b7-d2e7-489e-b42d-1a54a1d85b6a" containerName="mariadb-database-create" Feb 16 21:13:00 crc kubenswrapper[4811]: E0216 21:13:00.722113 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bf24fbe-b1bb-411b-b042-52ec9afefaec" containerName="mariadb-account-create-update" Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.722121 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bf24fbe-b1bb-411b-b042-52ec9afefaec" containerName="mariadb-account-create-update" Feb 16 21:13:00 crc kubenswrapper[4811]: E0216 21:13:00.722128 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15286204-6ffc-4f13-aacb-8c231edf893d" containerName="mariadb-database-create" Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.722134 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="15286204-6ffc-4f13-aacb-8c231edf893d" containerName="mariadb-database-create" Feb 16 21:13:00 crc 
kubenswrapper[4811]: E0216 21:13:00.722146 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aed10ff-a730-4ac8-88c7-395a71b9554b" containerName="mariadb-account-create-update" Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.722152 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aed10ff-a730-4ac8-88c7-395a71b9554b" containerName="mariadb-account-create-update" Feb 16 21:13:00 crc kubenswrapper[4811]: E0216 21:13:00.722162 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f556f9d0-3444-46b3-b435-dcf08cf76c0c" containerName="mariadb-account-create-update" Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.722167 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f556f9d0-3444-46b3-b435-dcf08cf76c0c" containerName="mariadb-account-create-update" Feb 16 21:13:00 crc kubenswrapper[4811]: E0216 21:13:00.722176 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eebc5893-8007-4da8-8e04-9c54d1a7b57c" containerName="mariadb-account-create-update" Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.722181 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="eebc5893-8007-4da8-8e04-9c54d1a7b57c" containerName="mariadb-account-create-update" Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.722355 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f556f9d0-3444-46b3-b435-dcf08cf76c0c" containerName="mariadb-account-create-update" Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.722379 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6922a5b7-d2e7-489e-b42d-1a54a1d85b6a" containerName="mariadb-database-create" Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.722392 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="48b8148b-cf17-4592-8583-edb4ccedca18" containerName="mariadb-database-create" Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.722408 4811 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="4aed10ff-a730-4ac8-88c7-395a71b9554b" containerName="mariadb-account-create-update"
Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.722420 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bf24fbe-b1bb-411b-b042-52ec9afefaec" containerName="mariadb-account-create-update"
Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.722430 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="15286204-6ffc-4f13-aacb-8c231edf893d" containerName="mariadb-database-create"
Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.722440 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="eebc5893-8007-4da8-8e04-9c54d1a7b57c" containerName="mariadb-account-create-update"
Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.723040 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-fx82t"
Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.725385 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-lkdd5"
Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.725553 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.732001 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-fx82t"]
Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.825224 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="5f050753-85f4-413e-92b6-0503db5e7391" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.846904 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw84l\" (UniqueName:
\"kubernetes.io/projected/3a2e3a6d-e105-43a4-bdae-9ef2bde0f137-kube-api-access-xw84l\") pod \"glance-db-sync-fx82t\" (UID: \"3a2e3a6d-e105-43a4-bdae-9ef2bde0f137\") " pod="openstack/glance-db-sync-fx82t"
Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.846962 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a2e3a6d-e105-43a4-bdae-9ef2bde0f137-config-data\") pod \"glance-db-sync-fx82t\" (UID: \"3a2e3a6d-e105-43a4-bdae-9ef2bde0f137\") " pod="openstack/glance-db-sync-fx82t"
Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.846992 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a2e3a6d-e105-43a4-bdae-9ef2bde0f137-combined-ca-bundle\") pod \"glance-db-sync-fx82t\" (UID: \"3a2e3a6d-e105-43a4-bdae-9ef2bde0f137\") " pod="openstack/glance-db-sync-fx82t"
Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.847743 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a2e3a6d-e105-43a4-bdae-9ef2bde0f137-db-sync-config-data\") pod \"glance-db-sync-fx82t\" (UID: \"3a2e3a6d-e105-43a4-bdae-9ef2bde0f137\") " pod="openstack/glance-db-sync-fx82t"
Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.949447 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw84l\" (UniqueName: \"kubernetes.io/projected/3a2e3a6d-e105-43a4-bdae-9ef2bde0f137-kube-api-access-xw84l\") pod \"glance-db-sync-fx82t\" (UID: \"3a2e3a6d-e105-43a4-bdae-9ef2bde0f137\") " pod="openstack/glance-db-sync-fx82t"
Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.949497 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName:
\"kubernetes.io/secret/3a2e3a6d-e105-43a4-bdae-9ef2bde0f137-config-data\") pod \"glance-db-sync-fx82t\" (UID: \"3a2e3a6d-e105-43a4-bdae-9ef2bde0f137\") " pod="openstack/glance-db-sync-fx82t"
Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.949531 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a2e3a6d-e105-43a4-bdae-9ef2bde0f137-combined-ca-bundle\") pod \"glance-db-sync-fx82t\" (UID: \"3a2e3a6d-e105-43a4-bdae-9ef2bde0f137\") " pod="openstack/glance-db-sync-fx82t"
Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.949687 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a2e3a6d-e105-43a4-bdae-9ef2bde0f137-db-sync-config-data\") pod \"glance-db-sync-fx82t\" (UID: \"3a2e3a6d-e105-43a4-bdae-9ef2bde0f137\") " pod="openstack/glance-db-sync-fx82t"
Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.955545 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a2e3a6d-e105-43a4-bdae-9ef2bde0f137-db-sync-config-data\") pod \"glance-db-sync-fx82t\" (UID: \"3a2e3a6d-e105-43a4-bdae-9ef2bde0f137\") " pod="openstack/glance-db-sync-fx82t"
Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.960673 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a2e3a6d-e105-43a4-bdae-9ef2bde0f137-config-data\") pod \"glance-db-sync-fx82t\" (UID: \"3a2e3a6d-e105-43a4-bdae-9ef2bde0f137\") " pod="openstack/glance-db-sync-fx82t"
Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.965179 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a2e3a6d-e105-43a4-bdae-9ef2bde0f137-combined-ca-bundle\") pod \"glance-db-sync-fx82t\" (UID:
\"3a2e3a6d-e105-43a4-bdae-9ef2bde0f137\") " pod="openstack/glance-db-sync-fx82t"
Feb 16 21:13:00 crc kubenswrapper[4811]: I0216 21:13:00.965669 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw84l\" (UniqueName: \"kubernetes.io/projected/3a2e3a6d-e105-43a4-bdae-9ef2bde0f137-kube-api-access-xw84l\") pod \"glance-db-sync-fx82t\" (UID: \"3a2e3a6d-e105-43a4-bdae-9ef2bde0f137\") " pod="openstack/glance-db-sync-fx82t"
Feb 16 21:13:01 crc kubenswrapper[4811]: I0216 21:13:01.039473 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-fx82t"
Feb 16 21:13:01 crc kubenswrapper[4811]: I0216 21:13:01.240364 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4247055a-8ca2-4a03-9a3a-d582d674b38a","Type":"ContainerStarted","Data":"68a1d2ba818f8b0f33b8d4c4e14b581a3bd432fbb1a28a786277d1a475f460f2"}
Feb 16 21:13:01 crc kubenswrapper[4811]: I0216 21:13:01.274478 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=19.143819905 podStartE2EDuration="1m4.274457787s" podCreationTimestamp="2026-02-16 21:11:57 +0000 UTC" firstStartedPulling="2026-02-16 21:12:15.116428526 +0000 UTC m=+953.045724464" lastFinishedPulling="2026-02-16 21:13:00.247066408 +0000 UTC m=+998.176362346" observedRunningTime="2026-02-16 21:13:01.263507961 +0000 UTC m=+999.192803909" watchObservedRunningTime="2026-02-16 21:13:01.274457787 +0000 UTC m=+999.203753745"
Feb 16 21:13:02 crc kubenswrapper[4811]: W0216 21:13:02.058237 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a2e3a6d_e105_43a4_bdae_9ef2bde0f137.slice/crio-b460f63667a3a1343b952d2cae9f867e1009ea7d8827a497e081099b7d0cc441 WatchSource:0}: Error finding container b460f63667a3a1343b952d2cae9f867e1009ea7d8827a497e081099b7d0cc441: Status
404 returned error can't find the container with id b460f63667a3a1343b952d2cae9f867e1009ea7d8827a497e081099b7d0cc441
Feb 16 21:13:02 crc kubenswrapper[4811]: I0216 21:13:02.059810 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-fx82t"]
Feb 16 21:13:02 crc kubenswrapper[4811]: I0216 21:13:02.256365 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fx82t" event={"ID":"3a2e3a6d-e105-43a4-bdae-9ef2bde0f137","Type":"ContainerStarted","Data":"b460f63667a3a1343b952d2cae9f867e1009ea7d8827a497e081099b7d0cc441"}
Feb 16 21:13:02 crc kubenswrapper[4811]: I0216 21:13:02.598417 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 16 21:13:02 crc kubenswrapper[4811]: I0216 21:13:02.668356 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-cdcch"
Feb 16 21:13:02 crc kubenswrapper[4811]: I0216 21:13:02.743455 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vvgkf"]
Feb 16 21:13:02 crc kubenswrapper[4811]: I0216 21:13:02.743749 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-vvgkf" podUID="5713e95b-f062-47be-8f12-aaa23215b31a" containerName="dnsmasq-dns" containerID="cri-o://702d9069293a76dee0b0b722e45dfda0ff2f646ff4908031403c179a4eb1b4a2" gracePeriod=10
Feb 16 21:13:02 crc kubenswrapper[4811]: I0216 21:13:02.869417 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.193117 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-create-52hns"]
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.194361 4811 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cloudkitty-db-create-52hns"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.220072 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-52hns"]
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.243392 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-2f7e-account-create-update-ssxfp"]
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.245518 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-2f7e-account-create-update-ssxfp"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.249109 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-db-secret"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.309323 4811 generic.go:334] "Generic (PLEG): container finished" podID="5713e95b-f062-47be-8f12-aaa23215b31a" containerID="702d9069293a76dee0b0b722e45dfda0ff2f646ff4908031403c179a4eb1b4a2" exitCode=0
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.309626 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vvgkf" event={"ID":"5713e95b-f062-47be-8f12-aaa23215b31a","Type":"ContainerDied","Data":"702d9069293a76dee0b0b722e45dfda0ff2f646ff4908031403c179a4eb1b4a2"}
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.313787 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn799\" (UniqueName: \"kubernetes.io/projected/fe5848b7-b291-4c54-a226-dfd4eedbea37-kube-api-access-fn799\") pod \"cloudkitty-db-create-52hns\" (UID: \"fe5848b7-b291-4c54-a226-dfd4eedbea37\") " pod="openstack/cloudkitty-db-create-52hns"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.314108 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName:
\"kubernetes.io/configmap/fe5848b7-b291-4c54-a226-dfd4eedbea37-operator-scripts\") pod \"cloudkitty-db-create-52hns\" (UID: \"fe5848b7-b291-4c54-a226-dfd4eedbea37\") " pod="openstack/cloudkitty-db-create-52hns"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.321967 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-2f7e-account-create-update-ssxfp"]
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.351688 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-g4gc5"]
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.353019 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-g4gc5"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.363816 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-g4gc5"]
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.416284 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/706cf667-a9da-4c0b-b0c2-8938db9f1b8c-operator-scripts\") pod \"cloudkitty-2f7e-account-create-update-ssxfp\" (UID: \"706cf667-a9da-4c0b-b0c2-8938db9f1b8c\") " pod="openstack/cloudkitty-2f7e-account-create-update-ssxfp"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.416500 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn799\" (UniqueName: \"kubernetes.io/projected/fe5848b7-b291-4c54-a226-dfd4eedbea37-kube-api-access-fn799\") pod \"cloudkitty-db-create-52hns\" (UID: \"fe5848b7-b291-4c54-a226-dfd4eedbea37\") " pod="openstack/cloudkitty-db-create-52hns"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.416587 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpxjw\" (UniqueName:
\"kubernetes.io/projected/706cf667-a9da-4c0b-b0c2-8938db9f1b8c-kube-api-access-xpxjw\") pod \"cloudkitty-2f7e-account-create-update-ssxfp\" (UID: \"706cf667-a9da-4c0b-b0c2-8938db9f1b8c\") " pod="openstack/cloudkitty-2f7e-account-create-update-ssxfp"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.416717 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe5848b7-b291-4c54-a226-dfd4eedbea37-operator-scripts\") pod \"cloudkitty-db-create-52hns\" (UID: \"fe5848b7-b291-4c54-a226-dfd4eedbea37\") " pod="openstack/cloudkitty-db-create-52hns"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.417849 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe5848b7-b291-4c54-a226-dfd4eedbea37-operator-scripts\") pod \"cloudkitty-db-create-52hns\" (UID: \"fe5848b7-b291-4c54-a226-dfd4eedbea37\") " pod="openstack/cloudkitty-db-create-52hns"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.461161 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn799\" (UniqueName: \"kubernetes.io/projected/fe5848b7-b291-4c54-a226-dfd4eedbea37-kube-api-access-fn799\") pod \"cloudkitty-db-create-52hns\" (UID: \"fe5848b7-b291-4c54-a226-dfd4eedbea37\") " pod="openstack/cloudkitty-db-create-52hns"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.496139 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-rmfvr"]
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.497567 4811 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/keystone-db-sync-rmfvr"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.501420 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.502989 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rmfvr"]
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.506414 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.506646 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.506802 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-s2qbh"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.517987 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ac65-account-create-update-cg9vg"]
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.518949 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz6ll\" (UniqueName: \"kubernetes.io/projected/313d0e82-09f0-4085-ac8b-9eafe564b8ec-kube-api-access-kz6ll\") pod \"cinder-db-create-g4gc5\" (UID: \"313d0e82-09f0-4085-ac8b-9eafe564b8ec\") " pod="openstack/cinder-db-create-g4gc5"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.519008 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/706cf667-a9da-4c0b-b0c2-8938db9f1b8c-operator-scripts\") pod \"cloudkitty-2f7e-account-create-update-ssxfp\" (UID: \"706cf667-a9da-4c0b-b0c2-8938db9f1b8c\") " pod="openstack/cloudkitty-2f7e-account-create-update-ssxfp"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.519075 4811 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"kube-api-access-xpxjw\" (UniqueName: \"kubernetes.io/projected/706cf667-a9da-4c0b-b0c2-8938db9f1b8c-kube-api-access-xpxjw\") pod \"cloudkitty-2f7e-account-create-update-ssxfp\" (UID: \"706cf667-a9da-4c0b-b0c2-8938db9f1b8c\") " pod="openstack/cloudkitty-2f7e-account-create-update-ssxfp"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.519116 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/313d0e82-09f0-4085-ac8b-9eafe564b8ec-operator-scripts\") pod \"cinder-db-create-g4gc5\" (UID: \"313d0e82-09f0-4085-ac8b-9eafe564b8ec\") " pod="openstack/cinder-db-create-g4gc5"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.519138 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ac65-account-create-update-cg9vg"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.519807 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/706cf667-a9da-4c0b-b0c2-8938db9f1b8c-operator-scripts\") pod \"cloudkitty-2f7e-account-create-update-ssxfp\" (UID: \"706cf667-a9da-4c0b-b0c2-8938db9f1b8c\") " pod="openstack/cloudkitty-2f7e-account-create-update-ssxfp"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.523862 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.528659 4811 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cloudkitty-db-create-52hns"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.544718 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ac65-account-create-update-cg9vg"]
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.584853 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpxjw\" (UniqueName: \"kubernetes.io/projected/706cf667-a9da-4c0b-b0c2-8938db9f1b8c-kube-api-access-xpxjw\") pod \"cloudkitty-2f7e-account-create-update-ssxfp\" (UID: \"706cf667-a9da-4c0b-b0c2-8938db9f1b8c\") " pod="openstack/cloudkitty-2f7e-account-create-update-ssxfp"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.586101 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-2f7e-account-create-update-ssxfp"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.598263 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vvgkf"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.643173 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz6ll\" (UniqueName: \"kubernetes.io/projected/313d0e82-09f0-4085-ac8b-9eafe564b8ec-kube-api-access-kz6ll\") pod \"cinder-db-create-g4gc5\" (UID: \"313d0e82-09f0-4085-ac8b-9eafe564b8ec\") " pod="openstack/cinder-db-create-g4gc5"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.643607 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jhxl\" (UniqueName: \"kubernetes.io/projected/44ff615c-b0ce-42f1-b01a-7a59d64dacc1-kube-api-access-4jhxl\") pod \"keystone-db-sync-rmfvr\" (UID: \"44ff615c-b0ce-42f1-b01a-7a59d64dacc1\") " pod="openstack/keystone-db-sync-rmfvr"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.647342 4811 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44ff615c-b0ce-42f1-b01a-7a59d64dacc1-config-data\") pod \"keystone-db-sync-rmfvr\" (UID: \"44ff615c-b0ce-42f1-b01a-7a59d64dacc1\") " pod="openstack/keystone-db-sync-rmfvr"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.647408 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44ff615c-b0ce-42f1-b01a-7a59d64dacc1-combined-ca-bundle\") pod \"keystone-db-sync-rmfvr\" (UID: \"44ff615c-b0ce-42f1-b01a-7a59d64dacc1\") " pod="openstack/keystone-db-sync-rmfvr"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.647684 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/313d0e82-09f0-4085-ac8b-9eafe564b8ec-operator-scripts\") pod \"cinder-db-create-g4gc5\" (UID: \"313d0e82-09f0-4085-ac8b-9eafe564b8ec\") " pod="openstack/cinder-db-create-g4gc5"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.647717 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd3e590c-550e-4dbc-a82b-8e81ac468062-operator-scripts\") pod \"cinder-ac65-account-create-update-cg9vg\" (UID: \"fd3e590c-550e-4dbc-a82b-8e81ac468062\") " pod="openstack/cinder-ac65-account-create-update-cg9vg"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.647850 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmlnq\" (UniqueName: \"kubernetes.io/projected/fd3e590c-550e-4dbc-a82b-8e81ac468062-kube-api-access-rmlnq\") pod \"cinder-ac65-account-create-update-cg9vg\" (UID: \"fd3e590c-550e-4dbc-a82b-8e81ac468062\") " pod="openstack/cinder-ac65-account-create-update-cg9vg"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216
21:13:03.650496 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/313d0e82-09f0-4085-ac8b-9eafe564b8ec-operator-scripts\") pod \"cinder-db-create-g4gc5\" (UID: \"313d0e82-09f0-4085-ac8b-9eafe564b8ec\") " pod="openstack/cinder-db-create-g4gc5"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.659340 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.690847 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz6ll\" (UniqueName: \"kubernetes.io/projected/313d0e82-09f0-4085-ac8b-9eafe564b8ec-kube-api-access-kz6ll\") pod \"cinder-db-create-g4gc5\" (UID: \"313d0e82-09f0-4085-ac8b-9eafe564b8ec\") " pod="openstack/cinder-db-create-g4gc5"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.695439 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-g4gc5"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.712523 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-n7dp7"]
Feb 16 21:13:03 crc kubenswrapper[4811]: E0216 21:13:03.714864 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5713e95b-f062-47be-8f12-aaa23215b31a" containerName="init"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.714887 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="5713e95b-f062-47be-8f12-aaa23215b31a" containerName="init"
Feb 16 21:13:03 crc kubenswrapper[4811]: E0216 21:13:03.714936 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5713e95b-f062-47be-8f12-aaa23215b31a" containerName="dnsmasq-dns"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.714942 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="5713e95b-f062-47be-8f12-aaa23215b31a" containerName="dnsmasq-dns"
Feb 16 21:13:03
crc kubenswrapper[4811]: I0216 21:13:03.715130 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="5713e95b-f062-47be-8f12-aaa23215b31a" containerName="dnsmasq-dns"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.715878 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-n7dp7"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.727331 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-7732-account-create-update-xfp4c"]
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.729465 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7732-account-create-update-xfp4c"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.733266 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.741322 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-n7dp7"]
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.747854 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7732-account-create-update-xfp4c"]
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.751503 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5713e95b-f062-47be-8f12-aaa23215b31a-ovsdbserver-sb\") pod \"5713e95b-f062-47be-8f12-aaa23215b31a\" (UID: \"5713e95b-f062-47be-8f12-aaa23215b31a\") "
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.751559 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5713e95b-f062-47be-8f12-aaa23215b31a-dns-svc\") pod \"5713e95b-f062-47be-8f12-aaa23215b31a\" (UID: \"5713e95b-f062-47be-8f12-aaa23215b31a\") "
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.751761 4811
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5713e95b-f062-47be-8f12-aaa23215b31a-ovsdbserver-nb\") pod \"5713e95b-f062-47be-8f12-aaa23215b31a\" (UID: \"5713e95b-f062-47be-8f12-aaa23215b31a\") "
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.751861 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6747\" (UniqueName: \"kubernetes.io/projected/5713e95b-f062-47be-8f12-aaa23215b31a-kube-api-access-f6747\") pod \"5713e95b-f062-47be-8f12-aaa23215b31a\" (UID: \"5713e95b-f062-47be-8f12-aaa23215b31a\") "
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.751898 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5713e95b-f062-47be-8f12-aaa23215b31a-config\") pod \"5713e95b-f062-47be-8f12-aaa23215b31a\" (UID: \"5713e95b-f062-47be-8f12-aaa23215b31a\") "
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.752309 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jhxl\" (UniqueName: \"kubernetes.io/projected/44ff615c-b0ce-42f1-b01a-7a59d64dacc1-kube-api-access-4jhxl\") pod \"keystone-db-sync-rmfvr\" (UID: \"44ff615c-b0ce-42f1-b01a-7a59d64dacc1\") " pod="openstack/keystone-db-sync-rmfvr"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.752371 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44ff615c-b0ce-42f1-b01a-7a59d64dacc1-config-data\") pod \"keystone-db-sync-rmfvr\" (UID: \"44ff615c-b0ce-42f1-b01a-7a59d64dacc1\") " pod="openstack/keystone-db-sync-rmfvr"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.752395 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName:
\"kubernetes.io/secret/44ff615c-b0ce-42f1-b01a-7a59d64dacc1-combined-ca-bundle\") pod \"keystone-db-sync-rmfvr\" (UID: \"44ff615c-b0ce-42f1-b01a-7a59d64dacc1\") " pod="openstack/keystone-db-sync-rmfvr"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.752499 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd3e590c-550e-4dbc-a82b-8e81ac468062-operator-scripts\") pod \"cinder-ac65-account-create-update-cg9vg\" (UID: \"fd3e590c-550e-4dbc-a82b-8e81ac468062\") " pod="openstack/cinder-ac65-account-create-update-cg9vg"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.752525 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmlnq\" (UniqueName: \"kubernetes.io/projected/fd3e590c-550e-4dbc-a82b-8e81ac468062-kube-api-access-rmlnq\") pod \"cinder-ac65-account-create-update-cg9vg\" (UID: \"fd3e590c-550e-4dbc-a82b-8e81ac468062\") " pod="openstack/cinder-ac65-account-create-update-cg9vg"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.757495 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd3e590c-550e-4dbc-a82b-8e81ac468062-operator-scripts\") pod \"cinder-ac65-account-create-update-cg9vg\" (UID: \"fd3e590c-550e-4dbc-a82b-8e81ac468062\") " pod="openstack/cinder-ac65-account-create-update-cg9vg"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.768872 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44ff615c-b0ce-42f1-b01a-7a59d64dacc1-config-data\") pod \"keystone-db-sync-rmfvr\" (UID: \"44ff615c-b0ce-42f1-b01a-7a59d64dacc1\") " pod="openstack/keystone-db-sync-rmfvr"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.769550 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/projected/5713e95b-f062-47be-8f12-aaa23215b31a-kube-api-access-f6747" (OuterVolumeSpecName: "kube-api-access-f6747") pod "5713e95b-f062-47be-8f12-aaa23215b31a" (UID: "5713e95b-f062-47be-8f12-aaa23215b31a"). InnerVolumeSpecName "kube-api-access-f6747". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.769583 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44ff615c-b0ce-42f1-b01a-7a59d64dacc1-combined-ca-bundle\") pod \"keystone-db-sync-rmfvr\" (UID: \"44ff615c-b0ce-42f1-b01a-7a59d64dacc1\") " pod="openstack/keystone-db-sync-rmfvr"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.788772 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmlnq\" (UniqueName: \"kubernetes.io/projected/fd3e590c-550e-4dbc-a82b-8e81ac468062-kube-api-access-rmlnq\") pod \"cinder-ac65-account-create-update-cg9vg\" (UID: \"fd3e590c-550e-4dbc-a82b-8e81ac468062\") " pod="openstack/cinder-ac65-account-create-update-cg9vg"
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.802978 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-vx8vr"]
Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.804370 4811 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/neutron-db-create-vx8vr" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.819665 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-vx8vr"] Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.820938 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jhxl\" (UniqueName: \"kubernetes.io/projected/44ff615c-b0ce-42f1-b01a-7a59d64dacc1-kube-api-access-4jhxl\") pod \"keystone-db-sync-rmfvr\" (UID: \"44ff615c-b0ce-42f1-b01a-7a59d64dacc1\") " pod="openstack/keystone-db-sync-rmfvr" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.829453 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5713e95b-f062-47be-8f12-aaa23215b31a-config" (OuterVolumeSpecName: "config") pod "5713e95b-f062-47be-8f12-aaa23215b31a" (UID: "5713e95b-f062-47be-8f12-aaa23215b31a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.850850 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5713e95b-f062-47be-8f12-aaa23215b31a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5713e95b-f062-47be-8f12-aaa23215b31a" (UID: "5713e95b-f062-47be-8f12-aaa23215b31a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.854538 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jdpv\" (UniqueName: \"kubernetes.io/projected/609199ec-a876-41fd-835a-826bb246817d-kube-api-access-2jdpv\") pod \"barbican-7732-account-create-update-xfp4c\" (UID: \"609199ec-a876-41fd-835a-826bb246817d\") " pod="openstack/barbican-7732-account-create-update-xfp4c" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.854882 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70d61096-bb5d-43e1-ba73-1829b343aec7-operator-scripts\") pod \"barbican-db-create-n7dp7\" (UID: \"70d61096-bb5d-43e1-ba73-1829b343aec7\") " pod="openstack/barbican-db-create-n7dp7" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.857769 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqgjd\" (UniqueName: \"kubernetes.io/projected/70d61096-bb5d-43e1-ba73-1829b343aec7-kube-api-access-rqgjd\") pod \"barbican-db-create-n7dp7\" (UID: \"70d61096-bb5d-43e1-ba73-1829b343aec7\") " pod="openstack/barbican-db-create-n7dp7" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.857808 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/609199ec-a876-41fd-835a-826bb246817d-operator-scripts\") pod \"barbican-7732-account-create-update-xfp4c\" (UID: \"609199ec-a876-41fd-835a-826bb246817d\") " pod="openstack/barbican-7732-account-create-update-xfp4c" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.857901 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5713e95b-f062-47be-8f12-aaa23215b31a-ovsdbserver-nb\") on 
node \"crc\" DevicePath \"\"" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.857915 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6747\" (UniqueName: \"kubernetes.io/projected/5713e95b-f062-47be-8f12-aaa23215b31a-kube-api-access-f6747\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.857926 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5713e95b-f062-47be-8f12-aaa23215b31a-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.864389 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-0e26-account-create-update-28c89"] Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.866257 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0e26-account-create-update-28c89" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.871672 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.879872 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0e26-account-create-update-28c89"] Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.882016 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5713e95b-f062-47be-8f12-aaa23215b31a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5713e95b-f062-47be-8f12-aaa23215b31a" (UID: "5713e95b-f062-47be-8f12-aaa23215b31a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.894881 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5713e95b-f062-47be-8f12-aaa23215b31a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5713e95b-f062-47be-8f12-aaa23215b31a" (UID: "5713e95b-f062-47be-8f12-aaa23215b31a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.915537 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-826wp"] Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.922104 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-826wp"] Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.964181 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8-operator-scripts\") pod \"neutron-0e26-account-create-update-28c89\" (UID: \"62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8\") " pod="openstack/neutron-0e26-account-create-update-28c89" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.964274 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29tpb\" (UniqueName: \"kubernetes.io/projected/6908fe5f-6f5a-4425-96fe-1b5d0998c02c-kube-api-access-29tpb\") pod \"neutron-db-create-vx8vr\" (UID: \"6908fe5f-6f5a-4425-96fe-1b5d0998c02c\") " pod="openstack/neutron-db-create-vx8vr" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.964322 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70d61096-bb5d-43e1-ba73-1829b343aec7-operator-scripts\") pod \"barbican-db-create-n7dp7\" (UID: 
\"70d61096-bb5d-43e1-ba73-1829b343aec7\") " pod="openstack/barbican-db-create-n7dp7" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.964368 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6908fe5f-6f5a-4425-96fe-1b5d0998c02c-operator-scripts\") pod \"neutron-db-create-vx8vr\" (UID: \"6908fe5f-6f5a-4425-96fe-1b5d0998c02c\") " pod="openstack/neutron-db-create-vx8vr" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.964451 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqgjd\" (UniqueName: \"kubernetes.io/projected/70d61096-bb5d-43e1-ba73-1829b343aec7-kube-api-access-rqgjd\") pod \"barbican-db-create-n7dp7\" (UID: \"70d61096-bb5d-43e1-ba73-1829b343aec7\") " pod="openstack/barbican-db-create-n7dp7" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.964486 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/609199ec-a876-41fd-835a-826bb246817d-operator-scripts\") pod \"barbican-7732-account-create-update-xfp4c\" (UID: \"609199ec-a876-41fd-835a-826bb246817d\") " pod="openstack/barbican-7732-account-create-update-xfp4c" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.964528 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jdpv\" (UniqueName: \"kubernetes.io/projected/609199ec-a876-41fd-835a-826bb246817d-kube-api-access-2jdpv\") pod \"barbican-7732-account-create-update-xfp4c\" (UID: \"609199ec-a876-41fd-835a-826bb246817d\") " pod="openstack/barbican-7732-account-create-update-xfp4c" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.964559 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2rsj\" (UniqueName: 
\"kubernetes.io/projected/62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8-kube-api-access-w2rsj\") pod \"neutron-0e26-account-create-update-28c89\" (UID: \"62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8\") " pod="openstack/neutron-0e26-account-create-update-28c89" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.964644 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5713e95b-f062-47be-8f12-aaa23215b31a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.964662 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5713e95b-f062-47be-8f12-aaa23215b31a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.966436 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70d61096-bb5d-43e1-ba73-1829b343aec7-operator-scripts\") pod \"barbican-db-create-n7dp7\" (UID: \"70d61096-bb5d-43e1-ba73-1829b343aec7\") " pod="openstack/barbican-db-create-n7dp7" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.977738 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/609199ec-a876-41fd-835a-826bb246817d-operator-scripts\") pod \"barbican-7732-account-create-update-xfp4c\" (UID: \"609199ec-a876-41fd-835a-826bb246817d\") " pod="openstack/barbican-7732-account-create-update-xfp4c" Feb 16 21:13:03 crc kubenswrapper[4811]: I0216 21:13:03.984729 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jdpv\" (UniqueName: \"kubernetes.io/projected/609199ec-a876-41fd-835a-826bb246817d-kube-api-access-2jdpv\") pod \"barbican-7732-account-create-update-xfp4c\" (UID: \"609199ec-a876-41fd-835a-826bb246817d\") " pod="openstack/barbican-7732-account-create-update-xfp4c" Feb 16 21:13:03 crc 
kubenswrapper[4811]: I0216 21:13:03.987838 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqgjd\" (UniqueName: \"kubernetes.io/projected/70d61096-bb5d-43e1-ba73-1829b343aec7-kube-api-access-rqgjd\") pod \"barbican-db-create-n7dp7\" (UID: \"70d61096-bb5d-43e1-ba73-1829b343aec7\") " pod="openstack/barbican-db-create-n7dp7" Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.028166 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rmfvr" Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.041867 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ac65-account-create-update-cg9vg" Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.056788 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-n7dp7" Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.067500 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2rsj\" (UniqueName: \"kubernetes.io/projected/62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8-kube-api-access-w2rsj\") pod \"neutron-0e26-account-create-update-28c89\" (UID: \"62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8\") " pod="openstack/neutron-0e26-account-create-update-28c89" Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.067964 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8-operator-scripts\") pod \"neutron-0e26-account-create-update-28c89\" (UID: \"62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8\") " pod="openstack/neutron-0e26-account-create-update-28c89" Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.068023 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29tpb\" (UniqueName: 
\"kubernetes.io/projected/6908fe5f-6f5a-4425-96fe-1b5d0998c02c-kube-api-access-29tpb\") pod \"neutron-db-create-vx8vr\" (UID: \"6908fe5f-6f5a-4425-96fe-1b5d0998c02c\") " pod="openstack/neutron-db-create-vx8vr" Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.068112 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6908fe5f-6f5a-4425-96fe-1b5d0998c02c-operator-scripts\") pod \"neutron-db-create-vx8vr\" (UID: \"6908fe5f-6f5a-4425-96fe-1b5d0998c02c\") " pod="openstack/neutron-db-create-vx8vr" Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.068948 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6908fe5f-6f5a-4425-96fe-1b5d0998c02c-operator-scripts\") pod \"neutron-db-create-vx8vr\" (UID: \"6908fe5f-6f5a-4425-96fe-1b5d0998c02c\") " pod="openstack/neutron-db-create-vx8vr" Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.069920 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8-operator-scripts\") pod \"neutron-0e26-account-create-update-28c89\" (UID: \"62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8\") " pod="openstack/neutron-0e26-account-create-update-28c89" Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.084971 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2rsj\" (UniqueName: \"kubernetes.io/projected/62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8-kube-api-access-w2rsj\") pod \"neutron-0e26-account-create-update-28c89\" (UID: \"62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8\") " pod="openstack/neutron-0e26-account-create-update-28c89" Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.086499 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29tpb\" (UniqueName: 
\"kubernetes.io/projected/6908fe5f-6f5a-4425-96fe-1b5d0998c02c-kube-api-access-29tpb\") pod \"neutron-db-create-vx8vr\" (UID: \"6908fe5f-6f5a-4425-96fe-1b5d0998c02c\") " pod="openstack/neutron-db-create-vx8vr" Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.124468 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7732-account-create-update-xfp4c" Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.135704 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vx8vr" Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.219929 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0e26-account-create-update-28c89" Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.272728 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3590443c-c5fd-4eec-a144-06cddd956651-etc-swift\") pod \"swift-storage-0\" (UID: \"3590443c-c5fd-4eec-a144-06cddd956651\") " pod="openstack/swift-storage-0" Feb 16 21:13:04 crc kubenswrapper[4811]: E0216 21:13:04.272951 4811 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 21:13:04 crc kubenswrapper[4811]: E0216 21:13:04.272973 4811 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 21:13:04 crc kubenswrapper[4811]: E0216 21:13:04.273036 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3590443c-c5fd-4eec-a144-06cddd956651-etc-swift podName:3590443c-c5fd-4eec-a144-06cddd956651 nodeName:}" failed. No retries permitted until 2026-02-16 21:13:20.273008369 +0000 UTC m=+1018.202304307 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3590443c-c5fd-4eec-a144-06cddd956651-etc-swift") pod "swift-storage-0" (UID: "3590443c-c5fd-4eec-a144-06cddd956651") : configmap "swift-ring-files" not found Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.279310 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-52hns"] Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.355911 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-52hns" event={"ID":"fe5848b7-b291-4c54-a226-dfd4eedbea37","Type":"ContainerStarted","Data":"c506995b7506040f6fb2b5e46eb7239c8fb6a7cdf99745c83fd6cb80316d9051"} Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.373268 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vvgkf" event={"ID":"5713e95b-f062-47be-8f12-aaa23215b31a","Type":"ContainerDied","Data":"bf093d318241e5440ded74b94f5e0a91419bd7a6d6cca003c934cfbe2e9a5a1f"} Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.373325 4811 scope.go:117] "RemoveContainer" containerID="702d9069293a76dee0b0b722e45dfda0ff2f646ff4908031403c179a4eb1b4a2" Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.373496 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vvgkf" Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.397142 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-2f7e-account-create-update-ssxfp"] Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.438621 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-g4gc5"] Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.450373 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vvgkf"] Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.456447 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vvgkf"] Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.473012 4811 scope.go:117] "RemoveContainer" containerID="7c84ca9cd626fd20df89c005af3ffb7c2dab114d5e7edc6ffe05dbf05c4771b1" Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.724338 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5713e95b-f062-47be-8f12-aaa23215b31a" path="/var/lib/kubelet/pods/5713e95b-f062-47be-8f12-aaa23215b31a/volumes" Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.725765 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bf24fbe-b1bb-411b-b042-52ec9afefaec" path="/var/lib/kubelet/pods/6bf24fbe-b1bb-411b-b042-52ec9afefaec/volumes" Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.918805 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-vx8vr"] Feb 16 21:13:04 crc kubenswrapper[4811]: I0216 21:13:04.978566 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ac65-account-create-update-cg9vg"] Feb 16 21:13:05 crc kubenswrapper[4811]: I0216 21:13:05.017043 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-n7dp7"] Feb 16 21:13:05 crc kubenswrapper[4811]: I0216 21:13:05.038003 4811 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rmfvr"] Feb 16 21:13:05 crc kubenswrapper[4811]: I0216 21:13:05.091517 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7732-account-create-update-xfp4c"] Feb 16 21:13:05 crc kubenswrapper[4811]: I0216 21:13:05.158884 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0e26-account-create-update-28c89"] Feb 16 21:13:05 crc kubenswrapper[4811]: I0216 21:13:05.391558 4811 generic.go:334] "Generic (PLEG): container finished" podID="313d0e82-09f0-4085-ac8b-9eafe564b8ec" containerID="44fd0337aaec1b0c5c944fa1876256e9981620e22dd378fae48706e25ee6f514" exitCode=0 Feb 16 21:13:05 crc kubenswrapper[4811]: I0216 21:13:05.391635 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-g4gc5" event={"ID":"313d0e82-09f0-4085-ac8b-9eafe564b8ec","Type":"ContainerDied","Data":"44fd0337aaec1b0c5c944fa1876256e9981620e22dd378fae48706e25ee6f514"} Feb 16 21:13:05 crc kubenswrapper[4811]: I0216 21:13:05.391660 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-g4gc5" event={"ID":"313d0e82-09f0-4085-ac8b-9eafe564b8ec","Type":"ContainerStarted","Data":"4192f58e566351f46d50270c20a18726e8d4ba7e053ffde672efea7323973826"} Feb 16 21:13:05 crc kubenswrapper[4811]: I0216 21:13:05.407173 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0e26-account-create-update-28c89" event={"ID":"62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8","Type":"ContainerStarted","Data":"d572c42c2e89c047a9b735d89cc62aca024042b9f2b491dc3bd7ae5d6ea69763"} Feb 16 21:13:05 crc kubenswrapper[4811]: I0216 21:13:05.420738 4811 generic.go:334] "Generic (PLEG): container finished" podID="fe5848b7-b291-4c54-a226-dfd4eedbea37" containerID="a9f35813ce7830f429d370e997e41a109acd2dd7b168756ccd8fb8332c1b7f18" exitCode=0 Feb 16 21:13:05 crc kubenswrapper[4811]: I0216 21:13:05.420807 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cloudkitty-db-create-52hns" event={"ID":"fe5848b7-b291-4c54-a226-dfd4eedbea37","Type":"ContainerDied","Data":"a9f35813ce7830f429d370e997e41a109acd2dd7b168756ccd8fb8332c1b7f18"} Feb 16 21:13:05 crc kubenswrapper[4811]: I0216 21:13:05.423406 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vx8vr" event={"ID":"6908fe5f-6f5a-4425-96fe-1b5d0998c02c","Type":"ContainerStarted","Data":"4e86ebccf5a7e4996125c2e1ea7759a8c95739bbb91d5e29eb8af75c958413fb"} Feb 16 21:13:05 crc kubenswrapper[4811]: I0216 21:13:05.423431 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vx8vr" event={"ID":"6908fe5f-6f5a-4425-96fe-1b5d0998c02c","Type":"ContainerStarted","Data":"0eb9582c1589389c2e1bae39528257c6d876a23d82931ee25955986344d12062"} Feb 16 21:13:05 crc kubenswrapper[4811]: I0216 21:13:05.432620 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7732-account-create-update-xfp4c" event={"ID":"609199ec-a876-41fd-835a-826bb246817d","Type":"ContainerStarted","Data":"5d38df271f56d7bff8509bc77ccf5fa98f50c3897ae457590c852ede9b74a42e"} Feb 16 21:13:05 crc kubenswrapper[4811]: I0216 21:13:05.441382 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-n7dp7" event={"ID":"70d61096-bb5d-43e1-ba73-1829b343aec7","Type":"ContainerStarted","Data":"cd4e074696862cbc4687603627e901236da328969c21dfa225478a02be826b46"} Feb 16 21:13:05 crc kubenswrapper[4811]: I0216 21:13:05.441427 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-n7dp7" event={"ID":"70d61096-bb5d-43e1-ba73-1829b343aec7","Type":"ContainerStarted","Data":"73af0372d13ce758d1c8ea7e9d671a68b476eb8c6ac19015976f948508bb1f9d"} Feb 16 21:13:05 crc kubenswrapper[4811]: I0216 21:13:05.456396 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ac65-account-create-update-cg9vg" 
event={"ID":"fd3e590c-550e-4dbc-a82b-8e81ac468062","Type":"ContainerStarted","Data":"43a15cbb8390d003fe6d21d2d97549fee71f5c8627b7531f4ecee18f16b044bc"} Feb 16 21:13:05 crc kubenswrapper[4811]: I0216 21:13:05.467330 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-vx8vr" podStartSLOduration=2.467314996 podStartE2EDuration="2.467314996s" podCreationTimestamp="2026-02-16 21:13:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:13:05.4602993 +0000 UTC m=+1003.389595238" watchObservedRunningTime="2026-02-16 21:13:05.467314996 +0000 UTC m=+1003.396610934" Feb 16 21:13:05 crc kubenswrapper[4811]: I0216 21:13:05.467390 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rmfvr" event={"ID":"44ff615c-b0ce-42f1-b01a-7a59d64dacc1","Type":"ContainerStarted","Data":"36d348b68b73abba028ecab1ba3f4e6f87755fec0d31cea7ab6bf5207a359e1e"} Feb 16 21:13:05 crc kubenswrapper[4811]: I0216 21:13:05.469337 4811 generic.go:334] "Generic (PLEG): container finished" podID="706cf667-a9da-4c0b-b0c2-8938db9f1b8c" containerID="e8d738ae84353f29467794a0dc974dc64d81fd85c3ae7ded93fdf8da7ac6935a" exitCode=0 Feb 16 21:13:05 crc kubenswrapper[4811]: I0216 21:13:05.469370 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-2f7e-account-create-update-ssxfp" event={"ID":"706cf667-a9da-4c0b-b0c2-8938db9f1b8c","Type":"ContainerDied","Data":"e8d738ae84353f29467794a0dc974dc64d81fd85c3ae7ded93fdf8da7ac6935a"} Feb 16 21:13:05 crc kubenswrapper[4811]: I0216 21:13:05.469386 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-2f7e-account-create-update-ssxfp" event={"ID":"706cf667-a9da-4c0b-b0c2-8938db9f1b8c","Type":"ContainerStarted","Data":"c90a692873bd2cda514ca7596e3999ea4ca12af530ab16ce832cbe1514a9bad5"} Feb 16 21:13:05 crc kubenswrapper[4811]: I0216 21:13:05.480863 
4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-n7dp7" podStartSLOduration=2.480845326 podStartE2EDuration="2.480845326s" podCreationTimestamp="2026-02-16 21:13:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:13:05.478806605 +0000 UTC m=+1003.408102543" watchObservedRunningTime="2026-02-16 21:13:05.480845326 +0000 UTC m=+1003.410141264" Feb 16 21:13:05 crc kubenswrapper[4811]: I0216 21:13:05.502180 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ac65-account-create-update-cg9vg" podStartSLOduration=2.502160952 podStartE2EDuration="2.502160952s" podCreationTimestamp="2026-02-16 21:13:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:13:05.49651701 +0000 UTC m=+1003.425812948" watchObservedRunningTime="2026-02-16 21:13:05.502160952 +0000 UTC m=+1003.431456890" Feb 16 21:13:06 crc kubenswrapper[4811]: I0216 21:13:06.419914 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qhsfb" podUID="b8edc00a-d032-460b-9e97-d784b4fdfe5c" containerName="ovn-controller" probeResult="failure" output=< Feb 16 21:13:06 crc kubenswrapper[4811]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 16 21:13:06 crc kubenswrapper[4811]: > Feb 16 21:13:06 crc kubenswrapper[4811]: I0216 21:13:06.455925 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-fktqj" Feb 16 21:13:06 crc kubenswrapper[4811]: I0216 21:13:06.479833 4811 generic.go:334] "Generic (PLEG): container finished" podID="62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8" containerID="bbfc63d03b97e0472b63b5e56415118c80696561470e41c04fa1dd9ad0a4da19" exitCode=0 Feb 16 21:13:06 crc kubenswrapper[4811]: 
I0216 21:13:06.479938 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0e26-account-create-update-28c89" event={"ID":"62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8","Type":"ContainerDied","Data":"bbfc63d03b97e0472b63b5e56415118c80696561470e41c04fa1dd9ad0a4da19"} Feb 16 21:13:06 crc kubenswrapper[4811]: I0216 21:13:06.481598 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-fktqj" Feb 16 21:13:06 crc kubenswrapper[4811]: I0216 21:13:06.485184 4811 generic.go:334] "Generic (PLEG): container finished" podID="8b6c7641-19e7-4831-82d4-8eda499301b7" containerID="c2a5443e727bdc5e45f338c7da8f12c64e1a2394687bb7e7b5e35a09d39f1691" exitCode=0 Feb 16 21:13:06 crc kubenswrapper[4811]: I0216 21:13:06.485267 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7dnxf" event={"ID":"8b6c7641-19e7-4831-82d4-8eda499301b7","Type":"ContainerDied","Data":"c2a5443e727bdc5e45f338c7da8f12c64e1a2394687bb7e7b5e35a09d39f1691"} Feb 16 21:13:06 crc kubenswrapper[4811]: I0216 21:13:06.487155 4811 generic.go:334] "Generic (PLEG): container finished" podID="fd3e590c-550e-4dbc-a82b-8e81ac468062" containerID="5c4a9a194033c09945c90447170d839d83636f1a1a0811f1b2e47bfbb34bc1b4" exitCode=0 Feb 16 21:13:06 crc kubenswrapper[4811]: I0216 21:13:06.487267 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ac65-account-create-update-cg9vg" event={"ID":"fd3e590c-550e-4dbc-a82b-8e81ac468062","Type":"ContainerDied","Data":"5c4a9a194033c09945c90447170d839d83636f1a1a0811f1b2e47bfbb34bc1b4"} Feb 16 21:13:06 crc kubenswrapper[4811]: I0216 21:13:06.489231 4811 generic.go:334] "Generic (PLEG): container finished" podID="6908fe5f-6f5a-4425-96fe-1b5d0998c02c" containerID="4e86ebccf5a7e4996125c2e1ea7759a8c95739bbb91d5e29eb8af75c958413fb" exitCode=0 Feb 16 21:13:06 crc kubenswrapper[4811]: I0216 21:13:06.489278 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-create-vx8vr" event={"ID":"6908fe5f-6f5a-4425-96fe-1b5d0998c02c","Type":"ContainerDied","Data":"4e86ebccf5a7e4996125c2e1ea7759a8c95739bbb91d5e29eb8af75c958413fb"} Feb 16 21:13:06 crc kubenswrapper[4811]: I0216 21:13:06.491178 4811 generic.go:334] "Generic (PLEG): container finished" podID="609199ec-a876-41fd-835a-826bb246817d" containerID="a4d28d60141c5334a374a161f6cba467bce06b694fb65ecd9709368fc11b1fbe" exitCode=0 Feb 16 21:13:06 crc kubenswrapper[4811]: I0216 21:13:06.491243 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7732-account-create-update-xfp4c" event={"ID":"609199ec-a876-41fd-835a-826bb246817d","Type":"ContainerDied","Data":"a4d28d60141c5334a374a161f6cba467bce06b694fb65ecd9709368fc11b1fbe"} Feb 16 21:13:06 crc kubenswrapper[4811]: I0216 21:13:06.493305 4811 generic.go:334] "Generic (PLEG): container finished" podID="70d61096-bb5d-43e1-ba73-1829b343aec7" containerID="cd4e074696862cbc4687603627e901236da328969c21dfa225478a02be826b46" exitCode=0 Feb 16 21:13:06 crc kubenswrapper[4811]: I0216 21:13:06.493501 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-n7dp7" event={"ID":"70d61096-bb5d-43e1-ba73-1829b343aec7","Type":"ContainerDied","Data":"cd4e074696862cbc4687603627e901236da328969c21dfa225478a02be826b46"} Feb 16 21:13:06 crc kubenswrapper[4811]: I0216 21:13:06.803960 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qhsfb-config-jrmsw"] Feb 16 21:13:06 crc kubenswrapper[4811]: I0216 21:13:06.805212 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qhsfb-config-jrmsw" Feb 16 21:13:06 crc kubenswrapper[4811]: I0216 21:13:06.811423 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 16 21:13:06 crc kubenswrapper[4811]: I0216 21:13:06.822209 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qhsfb-config-jrmsw"] Feb 16 21:13:06 crc kubenswrapper[4811]: I0216 21:13:06.931511 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-var-run\") pod \"ovn-controller-qhsfb-config-jrmsw\" (UID: \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\") " pod="openstack/ovn-controller-qhsfb-config-jrmsw" Feb 16 21:13:06 crc kubenswrapper[4811]: I0216 21:13:06.931589 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-var-log-ovn\") pod \"ovn-controller-qhsfb-config-jrmsw\" (UID: \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\") " pod="openstack/ovn-controller-qhsfb-config-jrmsw" Feb 16 21:13:06 crc kubenswrapper[4811]: I0216 21:13:06.931641 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-additional-scripts\") pod \"ovn-controller-qhsfb-config-jrmsw\" (UID: \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\") " pod="openstack/ovn-controller-qhsfb-config-jrmsw" Feb 16 21:13:06 crc kubenswrapper[4811]: I0216 21:13:06.931712 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqqcq\" (UniqueName: \"kubernetes.io/projected/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-kube-api-access-vqqcq\") pod \"ovn-controller-qhsfb-config-jrmsw\" (UID: 
\"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\") " pod="openstack/ovn-controller-qhsfb-config-jrmsw" Feb 16 21:13:06 crc kubenswrapper[4811]: I0216 21:13:06.931749 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-scripts\") pod \"ovn-controller-qhsfb-config-jrmsw\" (UID: \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\") " pod="openstack/ovn-controller-qhsfb-config-jrmsw" Feb 16 21:13:06 crc kubenswrapper[4811]: I0216 21:13:06.931807 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-var-run-ovn\") pod \"ovn-controller-qhsfb-config-jrmsw\" (UID: \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\") " pod="openstack/ovn-controller-qhsfb-config-jrmsw" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.023252 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-g4gc5" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.033064 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-var-log-ovn\") pod \"ovn-controller-qhsfb-config-jrmsw\" (UID: \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\") " pod="openstack/ovn-controller-qhsfb-config-jrmsw" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.035939 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-additional-scripts\") pod \"ovn-controller-qhsfb-config-jrmsw\" (UID: \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\") " pod="openstack/ovn-controller-qhsfb-config-jrmsw" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.036167 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqqcq\" (UniqueName: \"kubernetes.io/projected/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-kube-api-access-vqqcq\") pod \"ovn-controller-qhsfb-config-jrmsw\" (UID: \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\") " pod="openstack/ovn-controller-qhsfb-config-jrmsw" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.036286 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-scripts\") pod \"ovn-controller-qhsfb-config-jrmsw\" (UID: \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\") " pod="openstack/ovn-controller-qhsfb-config-jrmsw" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.036421 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-var-run-ovn\") pod \"ovn-controller-qhsfb-config-jrmsw\" (UID: 
\"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\") " pod="openstack/ovn-controller-qhsfb-config-jrmsw" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.036486 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-var-run\") pod \"ovn-controller-qhsfb-config-jrmsw\" (UID: \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\") " pod="openstack/ovn-controller-qhsfb-config-jrmsw" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.036831 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-var-run\") pod \"ovn-controller-qhsfb-config-jrmsw\" (UID: \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\") " pod="openstack/ovn-controller-qhsfb-config-jrmsw" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.036890 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-var-log-ovn\") pod \"ovn-controller-qhsfb-config-jrmsw\" (UID: \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\") " pod="openstack/ovn-controller-qhsfb-config-jrmsw" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.037580 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-additional-scripts\") pod \"ovn-controller-qhsfb-config-jrmsw\" (UID: \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\") " pod="openstack/ovn-controller-qhsfb-config-jrmsw" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.038248 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-var-run-ovn\") pod \"ovn-controller-qhsfb-config-jrmsw\" (UID: \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\") " 
pod="openstack/ovn-controller-qhsfb-config-jrmsw" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.040242 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-scripts\") pod \"ovn-controller-qhsfb-config-jrmsw\" (UID: \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\") " pod="openstack/ovn-controller-qhsfb-config-jrmsw" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.055336 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqqcq\" (UniqueName: \"kubernetes.io/projected/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-kube-api-access-vqqcq\") pod \"ovn-controller-qhsfb-config-jrmsw\" (UID: \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\") " pod="openstack/ovn-controller-qhsfb-config-jrmsw" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.136713 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qhsfb-config-jrmsw" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.138316 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz6ll\" (UniqueName: \"kubernetes.io/projected/313d0e82-09f0-4085-ac8b-9eafe564b8ec-kube-api-access-kz6ll\") pod \"313d0e82-09f0-4085-ac8b-9eafe564b8ec\" (UID: \"313d0e82-09f0-4085-ac8b-9eafe564b8ec\") " Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.138542 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/313d0e82-09f0-4085-ac8b-9eafe564b8ec-operator-scripts\") pod \"313d0e82-09f0-4085-ac8b-9eafe564b8ec\" (UID: \"313d0e82-09f0-4085-ac8b-9eafe564b8ec\") " Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.140352 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/313d0e82-09f0-4085-ac8b-9eafe564b8ec-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "313d0e82-09f0-4085-ac8b-9eafe564b8ec" (UID: "313d0e82-09f0-4085-ac8b-9eafe564b8ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.144107 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/313d0e82-09f0-4085-ac8b-9eafe564b8ec-kube-api-access-kz6ll" (OuterVolumeSpecName: "kube-api-access-kz6ll") pod "313d0e82-09f0-4085-ac8b-9eafe564b8ec" (UID: "313d0e82-09f0-4085-ac8b-9eafe564b8ec"). InnerVolumeSpecName "kube-api-access-kz6ll". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.199338 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-52hns" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.209613 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-2f7e-account-create-update-ssxfp" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.240693 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/313d0e82-09f0-4085-ac8b-9eafe564b8ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.240717 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz6ll\" (UniqueName: \"kubernetes.io/projected/313d0e82-09f0-4085-ac8b-9eafe564b8ec-kube-api-access-kz6ll\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.341552 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/706cf667-a9da-4c0b-b0c2-8938db9f1b8c-operator-scripts\") pod \"706cf667-a9da-4c0b-b0c2-8938db9f1b8c\" (UID: \"706cf667-a9da-4c0b-b0c2-8938db9f1b8c\") " Feb 16 21:13:07 crc 
kubenswrapper[4811]: I0216 21:13:07.342416 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe5848b7-b291-4c54-a226-dfd4eedbea37-operator-scripts\") pod \"fe5848b7-b291-4c54-a226-dfd4eedbea37\" (UID: \"fe5848b7-b291-4c54-a226-dfd4eedbea37\") " Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.342356 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/706cf667-a9da-4c0b-b0c2-8938db9f1b8c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "706cf667-a9da-4c0b-b0c2-8938db9f1b8c" (UID: "706cf667-a9da-4c0b-b0c2-8938db9f1b8c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.343008 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe5848b7-b291-4c54-a226-dfd4eedbea37-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe5848b7-b291-4c54-a226-dfd4eedbea37" (UID: "fe5848b7-b291-4c54-a226-dfd4eedbea37"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.343081 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpxjw\" (UniqueName: \"kubernetes.io/projected/706cf667-a9da-4c0b-b0c2-8938db9f1b8c-kube-api-access-xpxjw\") pod \"706cf667-a9da-4c0b-b0c2-8938db9f1b8c\" (UID: \"706cf667-a9da-4c0b-b0c2-8938db9f1b8c\") " Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.343104 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn799\" (UniqueName: \"kubernetes.io/projected/fe5848b7-b291-4c54-a226-dfd4eedbea37-kube-api-access-fn799\") pod \"fe5848b7-b291-4c54-a226-dfd4eedbea37\" (UID: \"fe5848b7-b291-4c54-a226-dfd4eedbea37\") " Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.345009 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/706cf667-a9da-4c0b-b0c2-8938db9f1b8c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.345032 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe5848b7-b291-4c54-a226-dfd4eedbea37-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.347551 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/706cf667-a9da-4c0b-b0c2-8938db9f1b8c-kube-api-access-xpxjw" (OuterVolumeSpecName: "kube-api-access-xpxjw") pod "706cf667-a9da-4c0b-b0c2-8938db9f1b8c" (UID: "706cf667-a9da-4c0b-b0c2-8938db9f1b8c"). InnerVolumeSpecName "kube-api-access-xpxjw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.347748 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe5848b7-b291-4c54-a226-dfd4eedbea37-kube-api-access-fn799" (OuterVolumeSpecName: "kube-api-access-fn799") pod "fe5848b7-b291-4c54-a226-dfd4eedbea37" (UID: "fe5848b7-b291-4c54-a226-dfd4eedbea37"). InnerVolumeSpecName "kube-api-access-fn799". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.447381 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpxjw\" (UniqueName: \"kubernetes.io/projected/706cf667-a9da-4c0b-b0c2-8938db9f1b8c-kube-api-access-xpxjw\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.447409 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fn799\" (UniqueName: \"kubernetes.io/projected/fe5848b7-b291-4c54-a226-dfd4eedbea37-kube-api-access-fn799\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.506308 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-52hns" event={"ID":"fe5848b7-b291-4c54-a226-dfd4eedbea37","Type":"ContainerDied","Data":"c506995b7506040f6fb2b5e46eb7239c8fb6a7cdf99745c83fd6cb80316d9051"} Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.506353 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c506995b7506040f6fb2b5e46eb7239c8fb6a7cdf99745c83fd6cb80316d9051" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.506512 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-52hns" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.509300 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-2f7e-account-create-update-ssxfp" event={"ID":"706cf667-a9da-4c0b-b0c2-8938db9f1b8c","Type":"ContainerDied","Data":"c90a692873bd2cda514ca7596e3999ea4ca12af530ab16ce832cbe1514a9bad5"} Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.509335 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c90a692873bd2cda514ca7596e3999ea4ca12af530ab16ce832cbe1514a9bad5" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.509346 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-2f7e-account-create-update-ssxfp" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.510893 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-g4gc5" event={"ID":"313d0e82-09f0-4085-ac8b-9eafe564b8ec","Type":"ContainerDied","Data":"4192f58e566351f46d50270c20a18726e8d4ba7e053ffde672efea7323973826"} Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.510935 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4192f58e566351f46d50270c20a18726e8d4ba7e053ffde672efea7323973826" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.511100 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-g4gc5" Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.630882 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qhsfb-config-jrmsw"] Feb 16 21:13:07 crc kubenswrapper[4811]: W0216 21:13:07.643337 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204f6288_7ab9_4c5b_94c2_a1f0b90179f5.slice/crio-3b5c582cdcfa329979920ef2d0f0443b7c84de4bd77ca91f65dd0838bb7276b1 WatchSource:0}: Error finding container 3b5c582cdcfa329979920ef2d0f0443b7c84de4bd77ca91f65dd0838bb7276b1: Status 404 returned error can't find the container with id 3b5c582cdcfa329979920ef2d0f0443b7c84de4bd77ca91f65dd0838bb7276b1 Feb 16 21:13:07 crc kubenswrapper[4811]: I0216 21:13:07.901316 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7732-account-create-update-xfp4c" Feb 16 21:13:08 crc kubenswrapper[4811]: I0216 21:13:08.057712 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/609199ec-a876-41fd-835a-826bb246817d-operator-scripts\") pod \"609199ec-a876-41fd-835a-826bb246817d\" (UID: \"609199ec-a876-41fd-835a-826bb246817d\") " Feb 16 21:13:08 crc kubenswrapper[4811]: I0216 21:13:08.058161 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jdpv\" (UniqueName: \"kubernetes.io/projected/609199ec-a876-41fd-835a-826bb246817d-kube-api-access-2jdpv\") pod \"609199ec-a876-41fd-835a-826bb246817d\" (UID: \"609199ec-a876-41fd-835a-826bb246817d\") " Feb 16 21:13:08 crc kubenswrapper[4811]: I0216 21:13:08.059871 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/609199ec-a876-41fd-835a-826bb246817d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"609199ec-a876-41fd-835a-826bb246817d" (UID: "609199ec-a876-41fd-835a-826bb246817d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:08 crc kubenswrapper[4811]: I0216 21:13:08.068175 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/609199ec-a876-41fd-835a-826bb246817d-kube-api-access-2jdpv" (OuterVolumeSpecName: "kube-api-access-2jdpv") pod "609199ec-a876-41fd-835a-826bb246817d" (UID: "609199ec-a876-41fd-835a-826bb246817d"). InnerVolumeSpecName "kube-api-access-2jdpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:13:08 crc kubenswrapper[4811]: I0216 21:13:08.168419 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jdpv\" (UniqueName: \"kubernetes.io/projected/609199ec-a876-41fd-835a-826bb246817d-kube-api-access-2jdpv\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:08 crc kubenswrapper[4811]: I0216 21:13:08.168469 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/609199ec-a876-41fd-835a-826bb246817d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:08 crc kubenswrapper[4811]: I0216 21:13:08.523398 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7732-account-create-update-xfp4c" event={"ID":"609199ec-a876-41fd-835a-826bb246817d","Type":"ContainerDied","Data":"5d38df271f56d7bff8509bc77ccf5fa98f50c3897ae457590c852ede9b74a42e"} Feb 16 21:13:08 crc kubenswrapper[4811]: I0216 21:13:08.523659 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d38df271f56d7bff8509bc77ccf5fa98f50c3897ae457590c852ede9b74a42e" Feb 16 21:13:08 crc kubenswrapper[4811]: I0216 21:13:08.523471 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-7732-account-create-update-xfp4c" Feb 16 21:13:08 crc kubenswrapper[4811]: I0216 21:13:08.529592 4811 generic.go:334] "Generic (PLEG): container finished" podID="204f6288-7ab9-4c5b-94c2-a1f0b90179f5" containerID="2738f6783b3629445dab537bceac537d8bccdadccc2e8069fd323a0857e3381f" exitCode=0 Feb 16 21:13:08 crc kubenswrapper[4811]: I0216 21:13:08.529656 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qhsfb-config-jrmsw" event={"ID":"204f6288-7ab9-4c5b-94c2-a1f0b90179f5","Type":"ContainerDied","Data":"2738f6783b3629445dab537bceac537d8bccdadccc2e8069fd323a0857e3381f"} Feb 16 21:13:08 crc kubenswrapper[4811]: I0216 21:13:08.529693 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qhsfb-config-jrmsw" event={"ID":"204f6288-7ab9-4c5b-94c2-a1f0b90179f5","Type":"ContainerStarted","Data":"3b5c582cdcfa329979920ef2d0f0443b7c84de4bd77ca91f65dd0838bb7276b1"} Feb 16 21:13:08 crc kubenswrapper[4811]: I0216 21:13:08.941962 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-2drks"] Feb 16 21:13:08 crc kubenswrapper[4811]: E0216 21:13:08.942562 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="313d0e82-09f0-4085-ac8b-9eafe564b8ec" containerName="mariadb-database-create" Feb 16 21:13:08 crc kubenswrapper[4811]: I0216 21:13:08.942588 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="313d0e82-09f0-4085-ac8b-9eafe564b8ec" containerName="mariadb-database-create" Feb 16 21:13:08 crc kubenswrapper[4811]: E0216 21:13:08.942613 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe5848b7-b291-4c54-a226-dfd4eedbea37" containerName="mariadb-database-create" Feb 16 21:13:08 crc kubenswrapper[4811]: I0216 21:13:08.942625 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe5848b7-b291-4c54-a226-dfd4eedbea37" containerName="mariadb-database-create" Feb 16 21:13:08 crc 
kubenswrapper[4811]: E0216 21:13:08.942668 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="609199ec-a876-41fd-835a-826bb246817d" containerName="mariadb-account-create-update" Feb 16 21:13:08 crc kubenswrapper[4811]: I0216 21:13:08.942679 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="609199ec-a876-41fd-835a-826bb246817d" containerName="mariadb-account-create-update" Feb 16 21:13:08 crc kubenswrapper[4811]: E0216 21:13:08.942706 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="706cf667-a9da-4c0b-b0c2-8938db9f1b8c" containerName="mariadb-account-create-update" Feb 16 21:13:08 crc kubenswrapper[4811]: I0216 21:13:08.942718 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="706cf667-a9da-4c0b-b0c2-8938db9f1b8c" containerName="mariadb-account-create-update" Feb 16 21:13:08 crc kubenswrapper[4811]: I0216 21:13:08.944121 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="609199ec-a876-41fd-835a-826bb246817d" containerName="mariadb-account-create-update" Feb 16 21:13:08 crc kubenswrapper[4811]: I0216 21:13:08.944161 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe5848b7-b291-4c54-a226-dfd4eedbea37" containerName="mariadb-database-create" Feb 16 21:13:08 crc kubenswrapper[4811]: I0216 21:13:08.944187 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="313d0e82-09f0-4085-ac8b-9eafe564b8ec" containerName="mariadb-database-create" Feb 16 21:13:08 crc kubenswrapper[4811]: I0216 21:13:08.944224 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="706cf667-a9da-4c0b-b0c2-8938db9f1b8c" containerName="mariadb-account-create-update" Feb 16 21:13:08 crc kubenswrapper[4811]: I0216 21:13:08.949615 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2drks" Feb 16 21:13:08 crc kubenswrapper[4811]: I0216 21:13:08.951772 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 16 21:13:08 crc kubenswrapper[4811]: I0216 21:13:08.955959 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2drks"] Feb 16 21:13:09 crc kubenswrapper[4811]: I0216 21:13:09.084103 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe-operator-scripts\") pod \"root-account-create-update-2drks\" (UID: \"22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe\") " pod="openstack/root-account-create-update-2drks" Feb 16 21:13:09 crc kubenswrapper[4811]: I0216 21:13:09.084242 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5rsg\" (UniqueName: \"kubernetes.io/projected/22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe-kube-api-access-z5rsg\") pod \"root-account-create-update-2drks\" (UID: \"22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe\") " pod="openstack/root-account-create-update-2drks" Feb 16 21:13:09 crc kubenswrapper[4811]: I0216 21:13:09.186584 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5rsg\" (UniqueName: \"kubernetes.io/projected/22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe-kube-api-access-z5rsg\") pod \"root-account-create-update-2drks\" (UID: \"22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe\") " pod="openstack/root-account-create-update-2drks" Feb 16 21:13:09 crc kubenswrapper[4811]: I0216 21:13:09.187240 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe-operator-scripts\") pod \"root-account-create-update-2drks\" (UID: 
\"22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe\") " pod="openstack/root-account-create-update-2drks" Feb 16 21:13:09 crc kubenswrapper[4811]: I0216 21:13:09.188124 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe-operator-scripts\") pod \"root-account-create-update-2drks\" (UID: \"22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe\") " pod="openstack/root-account-create-update-2drks" Feb 16 21:13:09 crc kubenswrapper[4811]: I0216 21:13:09.218311 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5rsg\" (UniqueName: \"kubernetes.io/projected/22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe-kube-api-access-z5rsg\") pod \"root-account-create-update-2drks\" (UID: \"22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe\") " pod="openstack/root-account-create-update-2drks" Feb 16 21:13:09 crc kubenswrapper[4811]: I0216 21:13:09.294489 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2drks" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.694150 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ac65-account-create-update-cg9vg" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.701853 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7dnxf" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.724245 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0e26-account-create-update-28c89" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.733558 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vx8vr" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.741328 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-n7dp7" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.760965 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qhsfb-config-jrmsw" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.820941 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29tpb\" (UniqueName: \"kubernetes.io/projected/6908fe5f-6f5a-4425-96fe-1b5d0998c02c-kube-api-access-29tpb\") pod \"6908fe5f-6f5a-4425-96fe-1b5d0998c02c\" (UID: \"6908fe5f-6f5a-4425-96fe-1b5d0998c02c\") " Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.821016 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70d61096-bb5d-43e1-ba73-1829b343aec7-operator-scripts\") pod \"70d61096-bb5d-43e1-ba73-1829b343aec7\" (UID: \"70d61096-bb5d-43e1-ba73-1829b343aec7\") " Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.821080 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-scripts\") pod \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\" (UID: \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\") " Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.821141 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-var-run-ovn\") pod \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\" (UID: \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\") " Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.821170 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd3e590c-550e-4dbc-a82b-8e81ac468062-operator-scripts\") pod \"fd3e590c-550e-4dbc-a82b-8e81ac468062\" (UID: 
\"fd3e590c-550e-4dbc-a82b-8e81ac468062\") " Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.821209 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-var-run\") pod \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\" (UID: \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\") " Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.821255 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6908fe5f-6f5a-4425-96fe-1b5d0998c02c-operator-scripts\") pod \"6908fe5f-6f5a-4425-96fe-1b5d0998c02c\" (UID: \"6908fe5f-6f5a-4425-96fe-1b5d0998c02c\") " Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.821255 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "204f6288-7ab9-4c5b-94c2-a1f0b90179f5" (UID: "204f6288-7ab9-4c5b-94c2-a1f0b90179f5"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.821285 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqgjd\" (UniqueName: \"kubernetes.io/projected/70d61096-bb5d-43e1-ba73-1829b343aec7-kube-api-access-rqgjd\") pod \"70d61096-bb5d-43e1-ba73-1829b343aec7\" (UID: \"70d61096-bb5d-43e1-ba73-1829b343aec7\") " Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.821299 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-var-run" (OuterVolumeSpecName: "var-run") pod "204f6288-7ab9-4c5b-94c2-a1f0b90179f5" (UID: "204f6288-7ab9-4c5b-94c2-a1f0b90179f5"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.821365 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b6c7641-19e7-4831-82d4-8eda499301b7-scripts\") pod \"8b6c7641-19e7-4831-82d4-8eda499301b7\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.821392 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8b6c7641-19e7-4831-82d4-8eda499301b7-dispersionconf\") pod \"8b6c7641-19e7-4831-82d4-8eda499301b7\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.821417 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmlnq\" (UniqueName: \"kubernetes.io/projected/fd3e590c-550e-4dbc-a82b-8e81ac468062-kube-api-access-rmlnq\") pod \"fd3e590c-550e-4dbc-a82b-8e81ac468062\" (UID: \"fd3e590c-550e-4dbc-a82b-8e81ac468062\") " Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.821439 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8b6c7641-19e7-4831-82d4-8eda499301b7-etc-swift\") pod \"8b6c7641-19e7-4831-82d4-8eda499301b7\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.821462 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2rsj\" (UniqueName: \"kubernetes.io/projected/62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8-kube-api-access-w2rsj\") pod \"62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8\" (UID: \"62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8\") " Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.821503 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" 
(UniqueName: \"kubernetes.io/configmap/8b6c7641-19e7-4831-82d4-8eda499301b7-ring-data-devices\") pod \"8b6c7641-19e7-4831-82d4-8eda499301b7\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.821527 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqqcq\" (UniqueName: \"kubernetes.io/projected/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-kube-api-access-vqqcq\") pod \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\" (UID: \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\") " Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.821549 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-var-log-ovn\") pod \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\" (UID: \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\") " Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.821575 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-additional-scripts\") pod \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\" (UID: \"204f6288-7ab9-4c5b-94c2-a1f0b90179f5\") " Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.821594 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8b6c7641-19e7-4831-82d4-8eda499301b7-swiftconf\") pod \"8b6c7641-19e7-4831-82d4-8eda499301b7\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.821658 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8-operator-scripts\") pod \"62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8\" (UID: \"62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8\") " Feb 16 21:13:10 crc 
kubenswrapper[4811]: I0216 21:13:10.821692 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bps6w\" (UniqueName: \"kubernetes.io/projected/8b6c7641-19e7-4831-82d4-8eda499301b7-kube-api-access-bps6w\") pod \"8b6c7641-19e7-4831-82d4-8eda499301b7\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.821711 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b6c7641-19e7-4831-82d4-8eda499301b7-combined-ca-bundle\") pod \"8b6c7641-19e7-4831-82d4-8eda499301b7\" (UID: \"8b6c7641-19e7-4831-82d4-8eda499301b7\") " Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.821783 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70d61096-bb5d-43e1-ba73-1829b343aec7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "70d61096-bb5d-43e1-ba73-1829b343aec7" (UID: "70d61096-bb5d-43e1-ba73-1829b343aec7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.821994 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "204f6288-7ab9-4c5b-94c2-a1f0b90179f5" (UID: "204f6288-7ab9-4c5b-94c2-a1f0b90179f5"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.822181 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70d61096-bb5d-43e1-ba73-1829b343aec7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.822230 4811 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.822250 4811 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-var-run\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.822268 4811 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.822522 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b6c7641-19e7-4831-82d4-8eda499301b7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8b6c7641-19e7-4831-82d4-8eda499301b7" (UID: "8b6c7641-19e7-4831-82d4-8eda499301b7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.823148 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "204f6288-7ab9-4c5b-94c2-a1f0b90179f5" (UID: "204f6288-7ab9-4c5b-94c2-a1f0b90179f5"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.823325 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-scripts" (OuterVolumeSpecName: "scripts") pod "204f6288-7ab9-4c5b-94c2-a1f0b90179f5" (UID: "204f6288-7ab9-4c5b-94c2-a1f0b90179f5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.823619 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd3e590c-550e-4dbc-a82b-8e81ac468062-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fd3e590c-550e-4dbc-a82b-8e81ac468062" (UID: "fd3e590c-550e-4dbc-a82b-8e81ac468062"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.824032 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8" (UID: "62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.824142 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b6c7641-19e7-4831-82d4-8eda499301b7-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "8b6c7641-19e7-4831-82d4-8eda499301b7" (UID: "8b6c7641-19e7-4831-82d4-8eda499301b7"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.824626 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6908fe5f-6f5a-4425-96fe-1b5d0998c02c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6908fe5f-6f5a-4425-96fe-1b5d0998c02c" (UID: "6908fe5f-6f5a-4425-96fe-1b5d0998c02c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.825333 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="5f050753-85f4-413e-92b6-0503db5e7391" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.828855 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-kube-api-access-vqqcq" (OuterVolumeSpecName: "kube-api-access-vqqcq") pod "204f6288-7ab9-4c5b-94c2-a1f0b90179f5" (UID: "204f6288-7ab9-4c5b-94c2-a1f0b90179f5"). InnerVolumeSpecName "kube-api-access-vqqcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.836301 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd3e590c-550e-4dbc-a82b-8e81ac468062-kube-api-access-rmlnq" (OuterVolumeSpecName: "kube-api-access-rmlnq") pod "fd3e590c-550e-4dbc-a82b-8e81ac468062" (UID: "fd3e590c-550e-4dbc-a82b-8e81ac468062"). InnerVolumeSpecName "kube-api-access-rmlnq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.836634 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8-kube-api-access-w2rsj" (OuterVolumeSpecName: "kube-api-access-w2rsj") pod "62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8" (UID: "62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8"). InnerVolumeSpecName "kube-api-access-w2rsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.838596 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70d61096-bb5d-43e1-ba73-1829b343aec7-kube-api-access-rqgjd" (OuterVolumeSpecName: "kube-api-access-rqgjd") pod "70d61096-bb5d-43e1-ba73-1829b343aec7" (UID: "70d61096-bb5d-43e1-ba73-1829b343aec7"). InnerVolumeSpecName "kube-api-access-rqgjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.844374 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6908fe5f-6f5a-4425-96fe-1b5d0998c02c-kube-api-access-29tpb" (OuterVolumeSpecName: "kube-api-access-29tpb") pod "6908fe5f-6f5a-4425-96fe-1b5d0998c02c" (UID: "6908fe5f-6f5a-4425-96fe-1b5d0998c02c"). InnerVolumeSpecName "kube-api-access-29tpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.846436 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b6c7641-19e7-4831-82d4-8eda499301b7-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "8b6c7641-19e7-4831-82d4-8eda499301b7" (UID: "8b6c7641-19e7-4831-82d4-8eda499301b7"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.856799 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b6c7641-19e7-4831-82d4-8eda499301b7-kube-api-access-bps6w" (OuterVolumeSpecName: "kube-api-access-bps6w") pod "8b6c7641-19e7-4831-82d4-8eda499301b7" (UID: "8b6c7641-19e7-4831-82d4-8eda499301b7"). InnerVolumeSpecName "kube-api-access-bps6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.861232 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b6c7641-19e7-4831-82d4-8eda499301b7-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "8b6c7641-19e7-4831-82d4-8eda499301b7" (UID: "8b6c7641-19e7-4831-82d4-8eda499301b7"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.864740 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b6c7641-19e7-4831-82d4-8eda499301b7-scripts" (OuterVolumeSpecName: "scripts") pod "8b6c7641-19e7-4831-82d4-8eda499301b7" (UID: "8b6c7641-19e7-4831-82d4-8eda499301b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.877157 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b6c7641-19e7-4831-82d4-8eda499301b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b6c7641-19e7-4831-82d4-8eda499301b7" (UID: "8b6c7641-19e7-4831-82d4-8eda499301b7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.923990 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.924025 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bps6w\" (UniqueName: \"kubernetes.io/projected/8b6c7641-19e7-4831-82d4-8eda499301b7-kube-api-access-bps6w\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.924039 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b6c7641-19e7-4831-82d4-8eda499301b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.924052 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29tpb\" (UniqueName: \"kubernetes.io/projected/6908fe5f-6f5a-4425-96fe-1b5d0998c02c-kube-api-access-29tpb\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.924064 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.924208 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd3e590c-550e-4dbc-a82b-8e81ac468062-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.924223 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6908fe5f-6f5a-4425-96fe-1b5d0998c02c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:10 crc 
kubenswrapper[4811]: I0216 21:13:10.924235 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqgjd\" (UniqueName: \"kubernetes.io/projected/70d61096-bb5d-43e1-ba73-1829b343aec7-kube-api-access-rqgjd\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.924248 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b6c7641-19e7-4831-82d4-8eda499301b7-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.924259 4811 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8b6c7641-19e7-4831-82d4-8eda499301b7-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.924270 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmlnq\" (UniqueName: \"kubernetes.io/projected/fd3e590c-550e-4dbc-a82b-8e81ac468062-kube-api-access-rmlnq\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.924280 4811 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8b6c7641-19e7-4831-82d4-8eda499301b7-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.924291 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2rsj\" (UniqueName: \"kubernetes.io/projected/62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8-kube-api-access-w2rsj\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.924304 4811 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8b6c7641-19e7-4831-82d4-8eda499301b7-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.924317 4811 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-vqqcq\" (UniqueName: \"kubernetes.io/projected/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-kube-api-access-vqqcq\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.924328 4811 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/204f6288-7ab9-4c5b-94c2-a1f0b90179f5-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:10 crc kubenswrapper[4811]: I0216 21:13:10.924338 4811 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8b6c7641-19e7-4831-82d4-8eda499301b7-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:11 crc kubenswrapper[4811]: I0216 21:13:11.455932 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-qhsfb" Feb 16 21:13:11 crc kubenswrapper[4811]: I0216 21:13:11.573688 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vx8vr" event={"ID":"6908fe5f-6f5a-4425-96fe-1b5d0998c02c","Type":"ContainerDied","Data":"0eb9582c1589389c2e1bae39528257c6d876a23d82931ee25955986344d12062"} Feb 16 21:13:11 crc kubenswrapper[4811]: I0216 21:13:11.573738 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0eb9582c1589389c2e1bae39528257c6d876a23d82931ee25955986344d12062" Feb 16 21:13:11 crc kubenswrapper[4811]: I0216 21:13:11.573809 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-vx8vr" Feb 16 21:13:11 crc kubenswrapper[4811]: I0216 21:13:11.576692 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qhsfb-config-jrmsw" event={"ID":"204f6288-7ab9-4c5b-94c2-a1f0b90179f5","Type":"ContainerDied","Data":"3b5c582cdcfa329979920ef2d0f0443b7c84de4bd77ca91f65dd0838bb7276b1"} Feb 16 21:13:11 crc kubenswrapper[4811]: I0216 21:13:11.576766 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b5c582cdcfa329979920ef2d0f0443b7c84de4bd77ca91f65dd0838bb7276b1" Feb 16 21:13:11 crc kubenswrapper[4811]: I0216 21:13:11.576793 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qhsfb-config-jrmsw" Feb 16 21:13:11 crc kubenswrapper[4811]: I0216 21:13:11.579616 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-n7dp7" Feb 16 21:13:11 crc kubenswrapper[4811]: I0216 21:13:11.579621 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-n7dp7" event={"ID":"70d61096-bb5d-43e1-ba73-1829b343aec7","Type":"ContainerDied","Data":"73af0372d13ce758d1c8ea7e9d671a68b476eb8c6ac19015976f948508bb1f9d"} Feb 16 21:13:11 crc kubenswrapper[4811]: I0216 21:13:11.579822 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73af0372d13ce758d1c8ea7e9d671a68b476eb8c6ac19015976f948508bb1f9d" Feb 16 21:13:11 crc kubenswrapper[4811]: I0216 21:13:11.582733 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0e26-account-create-update-28c89" Feb 16 21:13:11 crc kubenswrapper[4811]: I0216 21:13:11.582742 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0e26-account-create-update-28c89" event={"ID":"62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8","Type":"ContainerDied","Data":"d572c42c2e89c047a9b735d89cc62aca024042b9f2b491dc3bd7ae5d6ea69763"} Feb 16 21:13:11 crc kubenswrapper[4811]: I0216 21:13:11.582795 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d572c42c2e89c047a9b735d89cc62aca024042b9f2b491dc3bd7ae5d6ea69763" Feb 16 21:13:11 crc kubenswrapper[4811]: I0216 21:13:11.589060 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7dnxf" Feb 16 21:13:11 crc kubenswrapper[4811]: I0216 21:13:11.589556 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7dnxf" event={"ID":"8b6c7641-19e7-4831-82d4-8eda499301b7","Type":"ContainerDied","Data":"1c8c9ded0fa3c4f14b181f7145434d2cfad7933bf61715d30d1babc04da74195"} Feb 16 21:13:11 crc kubenswrapper[4811]: I0216 21:13:11.589604 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c8c9ded0fa3c4f14b181f7145434d2cfad7933bf61715d30d1babc04da74195" Feb 16 21:13:11 crc kubenswrapper[4811]: I0216 21:13:11.592703 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ac65-account-create-update-cg9vg" event={"ID":"fd3e590c-550e-4dbc-a82b-8e81ac468062","Type":"ContainerDied","Data":"43a15cbb8390d003fe6d21d2d97549fee71f5c8627b7531f4ecee18f16b044bc"} Feb 16 21:13:11 crc kubenswrapper[4811]: I0216 21:13:11.592917 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ac65-account-create-update-cg9vg" Feb 16 21:13:11 crc kubenswrapper[4811]: I0216 21:13:11.593155 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43a15cbb8390d003fe6d21d2d97549fee71f5c8627b7531f4ecee18f16b044bc" Feb 16 21:13:11 crc kubenswrapper[4811]: I0216 21:13:11.867436 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-qhsfb-config-jrmsw"] Feb 16 21:13:11 crc kubenswrapper[4811]: I0216 21:13:11.877178 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-qhsfb-config-jrmsw"] Feb 16 21:13:12 crc kubenswrapper[4811]: I0216 21:13:12.718900 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="204f6288-7ab9-4c5b-94c2-a1f0b90179f5" path="/var/lib/kubelet/pods/204f6288-7ab9-4c5b-94c2-a1f0b90179f5/volumes" Feb 16 21:13:13 crc kubenswrapper[4811]: I0216 21:13:13.655206 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:14 crc kubenswrapper[4811]: I0216 21:13:14.200993 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:14 crc kubenswrapper[4811]: I0216 21:13:14.631335 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:18 crc kubenswrapper[4811]: I0216 21:13:18.006505 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 21:13:18 crc kubenswrapper[4811]: I0216 21:13:18.007489 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="4247055a-8ca2-4a03-9a3a-d582d674b38a" containerName="config-reloader" containerID="cri-o://96c3b6637bc5d3056022600925d1249a04f728a2ee5378fa24e7c38a7ed2164c" gracePeriod=600 Feb 16 21:13:18 crc kubenswrapper[4811]: 
I0216 21:13:18.007516 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="4247055a-8ca2-4a03-9a3a-d582d674b38a" containerName="thanos-sidecar" containerID="cri-o://68a1d2ba818f8b0f33b8d4c4e14b581a3bd432fbb1a28a786277d1a475f460f2" gracePeriod=600 Feb 16 21:13:18 crc kubenswrapper[4811]: I0216 21:13:18.007417 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="4247055a-8ca2-4a03-9a3a-d582d674b38a" containerName="prometheus" containerID="cri-o://93b124e7caf16e25118f2236123f3af54ed98788aec76345c4753f01db043fdf" gracePeriod=600 Feb 16 21:13:18 crc kubenswrapper[4811]: E0216 21:13:18.266115 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4247055a_8ca2_4a03_9a3a_d582d674b38a.slice/crio-93b124e7caf16e25118f2236123f3af54ed98788aec76345c4753f01db043fdf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4247055a_8ca2_4a03_9a3a_d582d674b38a.slice/crio-conmon-93b124e7caf16e25118f2236123f3af54ed98788aec76345c4753f01db043fdf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4247055a_8ca2_4a03_9a3a_d582d674b38a.slice/crio-96c3b6637bc5d3056022600925d1249a04f728a2ee5378fa24e7c38a7ed2164c.scope\": RecentStats: unable to find data in memory cache]" Feb 16 21:13:18 crc kubenswrapper[4811]: I0216 21:13:18.655618 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="4247055a-8ca2-4a03-9a3a-d582d674b38a" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.112:9090/-/ready\": dial tcp 10.217.0.112:9090: connect: connection refused" Feb 16 21:13:18 crc kubenswrapper[4811]: I0216 
21:13:18.681350 4811 generic.go:334] "Generic (PLEG): container finished" podID="4247055a-8ca2-4a03-9a3a-d582d674b38a" containerID="68a1d2ba818f8b0f33b8d4c4e14b581a3bd432fbb1a28a786277d1a475f460f2" exitCode=0 Feb 16 21:13:18 crc kubenswrapper[4811]: I0216 21:13:18.681380 4811 generic.go:334] "Generic (PLEG): container finished" podID="4247055a-8ca2-4a03-9a3a-d582d674b38a" containerID="96c3b6637bc5d3056022600925d1249a04f728a2ee5378fa24e7c38a7ed2164c" exitCode=0 Feb 16 21:13:18 crc kubenswrapper[4811]: I0216 21:13:18.681388 4811 generic.go:334] "Generic (PLEG): container finished" podID="4247055a-8ca2-4a03-9a3a-d582d674b38a" containerID="93b124e7caf16e25118f2236123f3af54ed98788aec76345c4753f01db043fdf" exitCode=0 Feb 16 21:13:18 crc kubenswrapper[4811]: I0216 21:13:18.681407 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4247055a-8ca2-4a03-9a3a-d582d674b38a","Type":"ContainerDied","Data":"68a1d2ba818f8b0f33b8d4c4e14b581a3bd432fbb1a28a786277d1a475f460f2"} Feb 16 21:13:18 crc kubenswrapper[4811]: I0216 21:13:18.681429 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4247055a-8ca2-4a03-9a3a-d582d674b38a","Type":"ContainerDied","Data":"96c3b6637bc5d3056022600925d1249a04f728a2ee5378fa24e7c38a7ed2164c"} Feb 16 21:13:18 crc kubenswrapper[4811]: I0216 21:13:18.681439 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4247055a-8ca2-4a03-9a3a-d582d674b38a","Type":"ContainerDied","Data":"93b124e7caf16e25118f2236123f3af54ed98788aec76345c4753f01db043fdf"} Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.199356 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.315730 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4247055a-8ca2-4a03-9a3a-d582d674b38a-web-config\") pod \"4247055a-8ca2-4a03-9a3a-d582d674b38a\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.315800 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/4247055a-8ca2-4a03-9a3a-d582d674b38a-prometheus-metric-storage-rulefiles-1\") pod \"4247055a-8ca2-4a03-9a3a-d582d674b38a\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.315840 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4247055a-8ca2-4a03-9a3a-d582d674b38a-thanos-prometheus-http-client-file\") pod \"4247055a-8ca2-4a03-9a3a-d582d674b38a\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.315971 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4bcc5437-f61d-4c30-a2fb-514fee9806b3\") pod \"4247055a-8ca2-4a03-9a3a-d582d674b38a\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.316041 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/4247055a-8ca2-4a03-9a3a-d582d674b38a-prometheus-metric-storage-rulefiles-2\") pod \"4247055a-8ca2-4a03-9a3a-d582d674b38a\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " Feb 16 
21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.316058 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4247055a-8ca2-4a03-9a3a-d582d674b38a-config\") pod \"4247055a-8ca2-4a03-9a3a-d582d674b38a\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.316084 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4247055a-8ca2-4a03-9a3a-d582d674b38a-tls-assets\") pod \"4247055a-8ca2-4a03-9a3a-d582d674b38a\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.316128 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4247055a-8ca2-4a03-9a3a-d582d674b38a-config-out\") pod \"4247055a-8ca2-4a03-9a3a-d582d674b38a\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.316221 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4247055a-8ca2-4a03-9a3a-d582d674b38a-prometheus-metric-storage-rulefiles-0\") pod \"4247055a-8ca2-4a03-9a3a-d582d674b38a\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.316257 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgs5z\" (UniqueName: \"kubernetes.io/projected/4247055a-8ca2-4a03-9a3a-d582d674b38a-kube-api-access-lgs5z\") pod \"4247055a-8ca2-4a03-9a3a-d582d674b38a\" (UID: \"4247055a-8ca2-4a03-9a3a-d582d674b38a\") " Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.317709 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/4247055a-8ca2-4a03-9a3a-d582d674b38a-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "4247055a-8ca2-4a03-9a3a-d582d674b38a" (UID: "4247055a-8ca2-4a03-9a3a-d582d674b38a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.323814 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4247055a-8ca2-4a03-9a3a-d582d674b38a-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "4247055a-8ca2-4a03-9a3a-d582d674b38a" (UID: "4247055a-8ca2-4a03-9a3a-d582d674b38a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.324070 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4247055a-8ca2-4a03-9a3a-d582d674b38a-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "4247055a-8ca2-4a03-9a3a-d582d674b38a" (UID: "4247055a-8ca2-4a03-9a3a-d582d674b38a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.326589 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4247055a-8ca2-4a03-9a3a-d582d674b38a-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "4247055a-8ca2-4a03-9a3a-d582d674b38a" (UID: "4247055a-8ca2-4a03-9a3a-d582d674b38a"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.327902 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4247055a-8ca2-4a03-9a3a-d582d674b38a-config" (OuterVolumeSpecName: "config") pod "4247055a-8ca2-4a03-9a3a-d582d674b38a" (UID: "4247055a-8ca2-4a03-9a3a-d582d674b38a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.333052 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4247055a-8ca2-4a03-9a3a-d582d674b38a-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "4247055a-8ca2-4a03-9a3a-d582d674b38a" (UID: "4247055a-8ca2-4a03-9a3a-d582d674b38a"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.333544 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2drks"] Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.334494 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4247055a-8ca2-4a03-9a3a-d582d674b38a-kube-api-access-lgs5z" (OuterVolumeSpecName: "kube-api-access-lgs5z") pod "4247055a-8ca2-4a03-9a3a-d582d674b38a" (UID: "4247055a-8ca2-4a03-9a3a-d582d674b38a"). InnerVolumeSpecName "kube-api-access-lgs5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.338407 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4247055a-8ca2-4a03-9a3a-d582d674b38a-config-out" (OuterVolumeSpecName: "config-out") pod "4247055a-8ca2-4a03-9a3a-d582d674b38a" (UID: "4247055a-8ca2-4a03-9a3a-d582d674b38a"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.386608 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4247055a-8ca2-4a03-9a3a-d582d674b38a-web-config" (OuterVolumeSpecName: "web-config") pod "4247055a-8ca2-4a03-9a3a-d582d674b38a" (UID: "4247055a-8ca2-4a03-9a3a-d582d674b38a"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.416055 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4bcc5437-f61d-4c30-a2fb-514fee9806b3" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "4247055a-8ca2-4a03-9a3a-d582d674b38a" (UID: "4247055a-8ca2-4a03-9a3a-d582d674b38a"). InnerVolumeSpecName "pvc-4bcc5437-f61d-4c30-a2fb-514fee9806b3". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.418471 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4247055a-8ca2-4a03-9a3a-d582d674b38a-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.418511 4811 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/4247055a-8ca2-4a03-9a3a-d582d674b38a-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.418524 4811 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4247055a-8ca2-4a03-9a3a-d582d674b38a-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.418536 4811 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/4247055a-8ca2-4a03-9a3a-d582d674b38a-config-out\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.418545 4811 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4247055a-8ca2-4a03-9a3a-d582d674b38a-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.418559 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgs5z\" (UniqueName: \"kubernetes.io/projected/4247055a-8ca2-4a03-9a3a-d582d674b38a-kube-api-access-lgs5z\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.418572 4811 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4247055a-8ca2-4a03-9a3a-d582d674b38a-web-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.418599 4811 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/4247055a-8ca2-4a03-9a3a-d582d674b38a-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.418612 4811 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4247055a-8ca2-4a03-9a3a-d582d674b38a-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.418658 4811 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4bcc5437-f61d-4c30-a2fb-514fee9806b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4bcc5437-f61d-4c30-a2fb-514fee9806b3\") on node \"crc\" " Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.454761 4811 csi_attacher.go:630] 
kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.454920 4811 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4bcc5437-f61d-4c30-a2fb-514fee9806b3" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4bcc5437-f61d-4c30-a2fb-514fee9806b3") on node "crc" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.520171 4811 reconciler_common.go:293] "Volume detached for volume \"pvc-4bcc5437-f61d-4c30-a2fb-514fee9806b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4bcc5437-f61d-4c30-a2fb-514fee9806b3\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.693179 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2drks" event={"ID":"22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe","Type":"ContainerStarted","Data":"80ff26fc1997c91f922c78965c5dbdece23f16852a22210f747b539e8d734331"} Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.693322 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2drks" event={"ID":"22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe","Type":"ContainerStarted","Data":"bba1ed2d4d35c1251eccebc4d1a23fcdecdd0960b4eafb4af957488f87426e35"} Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.694915 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fx82t" event={"ID":"3a2e3a6d-e105-43a4-bdae-9ef2bde0f137","Type":"ContainerStarted","Data":"5e299b53bfb4a24c4ae0c44540b5106081d775033193a3bf8fa260de54391459"} Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.696497 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rmfvr" event={"ID":"44ff615c-b0ce-42f1-b01a-7a59d64dacc1","Type":"ContainerStarted","Data":"bc35dfaa32f2c323ce09c949a19d8a2d682b9c0061ba49203b45ef63e29fa721"} Feb 16 21:13:19 crc 
kubenswrapper[4811]: I0216 21:13:19.699645 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4247055a-8ca2-4a03-9a3a-d582d674b38a","Type":"ContainerDied","Data":"6d8fb14050b5799345e1524def7a0c2c30e0adf6e124a95cff86937c1ed6cf40"} Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.699688 4811 scope.go:117] "RemoveContainer" containerID="68a1d2ba818f8b0f33b8d4c4e14b581a3bd432fbb1a28a786277d1a475f460f2" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.699828 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.735486 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-2drks" podStartSLOduration=11.735464286 podStartE2EDuration="11.735464286s" podCreationTimestamp="2026-02-16 21:13:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:13:19.721344781 +0000 UTC m=+1017.650640729" watchObservedRunningTime="2026-02-16 21:13:19.735464286 +0000 UTC m=+1017.664760224" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.740474 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-fx82t" podStartSLOduration=2.9589475419999998 podStartE2EDuration="19.740456622s" podCreationTimestamp="2026-02-16 21:13:00 +0000 UTC" firstStartedPulling="2026-02-16 21:13:02.060974878 +0000 UTC m=+999.990270816" lastFinishedPulling="2026-02-16 21:13:18.842483958 +0000 UTC m=+1016.771779896" observedRunningTime="2026-02-16 21:13:19.737907847 +0000 UTC m=+1017.667203785" watchObservedRunningTime="2026-02-16 21:13:19.740456622 +0000 UTC m=+1017.669752570" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.758085 4811 scope.go:117] "RemoveContainer" 
containerID="96c3b6637bc5d3056022600925d1249a04f728a2ee5378fa24e7c38a7ed2164c" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.789178 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-rmfvr" podStartSLOduration=3.231054864 podStartE2EDuration="16.789151166s" podCreationTimestamp="2026-02-16 21:13:03 +0000 UTC" firstStartedPulling="2026-02-16 21:13:05.213428821 +0000 UTC m=+1003.142724759" lastFinishedPulling="2026-02-16 21:13:18.771525083 +0000 UTC m=+1016.700821061" observedRunningTime="2026-02-16 21:13:19.758391293 +0000 UTC m=+1017.687687241" watchObservedRunningTime="2026-02-16 21:13:19.789151166 +0000 UTC m=+1017.718447104" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.817464 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.819539 4811 scope.go:117] "RemoveContainer" containerID="93b124e7caf16e25118f2236123f3af54ed98788aec76345c4753f01db043fdf" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.834275 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.842235 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 21:13:19 crc kubenswrapper[4811]: E0216 21:13:19.842621 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6908fe5f-6f5a-4425-96fe-1b5d0998c02c" containerName="mariadb-database-create" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.842639 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6908fe5f-6f5a-4425-96fe-1b5d0998c02c" containerName="mariadb-database-create" Feb 16 21:13:19 crc kubenswrapper[4811]: E0216 21:13:19.842648 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d61096-bb5d-43e1-ba73-1829b343aec7" containerName="mariadb-database-create" Feb 16 
21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.842656 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d61096-bb5d-43e1-ba73-1829b343aec7" containerName="mariadb-database-create" Feb 16 21:13:19 crc kubenswrapper[4811]: E0216 21:13:19.842671 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8" containerName="mariadb-account-create-update" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.842677 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8" containerName="mariadb-account-create-update" Feb 16 21:13:19 crc kubenswrapper[4811]: E0216 21:13:19.842687 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b6c7641-19e7-4831-82d4-8eda499301b7" containerName="swift-ring-rebalance" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.842693 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b6c7641-19e7-4831-82d4-8eda499301b7" containerName="swift-ring-rebalance" Feb 16 21:13:19 crc kubenswrapper[4811]: E0216 21:13:19.842704 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4247055a-8ca2-4a03-9a3a-d582d674b38a" containerName="init-config-reloader" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.842711 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4247055a-8ca2-4a03-9a3a-d582d674b38a" containerName="init-config-reloader" Feb 16 21:13:19 crc kubenswrapper[4811]: E0216 21:13:19.842721 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd3e590c-550e-4dbc-a82b-8e81ac468062" containerName="mariadb-account-create-update" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.842727 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd3e590c-550e-4dbc-a82b-8e81ac468062" containerName="mariadb-account-create-update" Feb 16 21:13:19 crc kubenswrapper[4811]: E0216 21:13:19.842736 4811 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4247055a-8ca2-4a03-9a3a-d582d674b38a" containerName="thanos-sidecar" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.842741 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4247055a-8ca2-4a03-9a3a-d582d674b38a" containerName="thanos-sidecar" Feb 16 21:13:19 crc kubenswrapper[4811]: E0216 21:13:19.842756 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4247055a-8ca2-4a03-9a3a-d582d674b38a" containerName="config-reloader" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.842762 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4247055a-8ca2-4a03-9a3a-d582d674b38a" containerName="config-reloader" Feb 16 21:13:19 crc kubenswrapper[4811]: E0216 21:13:19.842775 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="204f6288-7ab9-4c5b-94c2-a1f0b90179f5" containerName="ovn-config" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.842782 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="204f6288-7ab9-4c5b-94c2-a1f0b90179f5" containerName="ovn-config" Feb 16 21:13:19 crc kubenswrapper[4811]: E0216 21:13:19.842791 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4247055a-8ca2-4a03-9a3a-d582d674b38a" containerName="prometheus" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.842797 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4247055a-8ca2-4a03-9a3a-d582d674b38a" containerName="prometheus" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.842948 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6908fe5f-6f5a-4425-96fe-1b5d0998c02c" containerName="mariadb-database-create" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.842963 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b6c7641-19e7-4831-82d4-8eda499301b7" containerName="swift-ring-rebalance" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.842973 4811 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="70d61096-bb5d-43e1-ba73-1829b343aec7" containerName="mariadb-database-create" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.842981 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4247055a-8ca2-4a03-9a3a-d582d674b38a" containerName="prometheus" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.842991 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd3e590c-550e-4dbc-a82b-8e81ac468062" containerName="mariadb-account-create-update" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.843002 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8" containerName="mariadb-account-create-update" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.843011 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4247055a-8ca2-4a03-9a3a-d582d674b38a" containerName="config-reloader" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.843024 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="204f6288-7ab9-4c5b-94c2-a1f0b90179f5" containerName="ovn-config" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.843031 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4247055a-8ca2-4a03-9a3a-d582d674b38a" containerName="thanos-sidecar" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.844851 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.846626 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.846833 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.850453 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.850593 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.850628 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.850738 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.852425 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-p56vd" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.852573 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.853106 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.857023 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.903216 4811 scope.go:117] "RemoveContainer" 
containerID="1ecbba720783e3c6c08a9da6626dfdffbc9cf13424e8958bd036af88a0d5c304" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.927989 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e994011a-8ba4-4eed-9c4c-5ddac8b43325-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.928145 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e994011a-8ba4-4eed-9c4c-5ddac8b43325-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.928252 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e994011a-8ba4-4eed-9c4c-5ddac8b43325-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.928363 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e994011a-8ba4-4eed-9c4c-5ddac8b43325-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.928488 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e994011a-8ba4-4eed-9c4c-5ddac8b43325-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.928592 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e994011a-8ba4-4eed-9c4c-5ddac8b43325-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.928692 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e994011a-8ba4-4eed-9c4c-5ddac8b43325-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.928793 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67nwt\" (UniqueName: \"kubernetes.io/projected/e994011a-8ba4-4eed-9c4c-5ddac8b43325-kube-api-access-67nwt\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.928936 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e994011a-8ba4-4eed-9c4c-5ddac8b43325-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " 
pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.929043 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e994011a-8ba4-4eed-9c4c-5ddac8b43325-config\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.929142 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e994011a-8ba4-4eed-9c4c-5ddac8b43325-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.929251 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e994011a-8ba4-4eed-9c4c-5ddac8b43325-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:19 crc kubenswrapper[4811]: I0216 21:13:19.929352 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4bcc5437-f61d-4c30-a2fb-514fee9806b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4bcc5437-f61d-4c30-a2fb-514fee9806b3\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.031153 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/e994011a-8ba4-4eed-9c4c-5ddac8b43325-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.031218 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e994011a-8ba4-4eed-9c4c-5ddac8b43325-config\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.031256 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e994011a-8ba4-4eed-9c4c-5ddac8b43325-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.031282 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e994011a-8ba4-4eed-9c4c-5ddac8b43325-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.031305 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4bcc5437-f61d-4c30-a2fb-514fee9806b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4bcc5437-f61d-4c30-a2fb-514fee9806b3\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.031341 4811 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e994011a-8ba4-4eed-9c4c-5ddac8b43325-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.031361 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e994011a-8ba4-4eed-9c4c-5ddac8b43325-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.031379 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e994011a-8ba4-4eed-9c4c-5ddac8b43325-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.031409 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e994011a-8ba4-4eed-9c4c-5ddac8b43325-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.031442 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e994011a-8ba4-4eed-9c4c-5ddac8b43325-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 
21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.031465 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e994011a-8ba4-4eed-9c4c-5ddac8b43325-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.031489 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e994011a-8ba4-4eed-9c4c-5ddac8b43325-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.031517 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67nwt\" (UniqueName: \"kubernetes.io/projected/e994011a-8ba4-4eed-9c4c-5ddac8b43325-kube-api-access-67nwt\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.033305 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e994011a-8ba4-4eed-9c4c-5ddac8b43325-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.033717 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e994011a-8ba4-4eed-9c4c-5ddac8b43325-prometheus-metric-storage-rulefiles-2\") pod 
\"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.034546 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e994011a-8ba4-4eed-9c4c-5ddac8b43325-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.035938 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e994011a-8ba4-4eed-9c4c-5ddac8b43325-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.036610 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e994011a-8ba4-4eed-9c4c-5ddac8b43325-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.036629 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e994011a-8ba4-4eed-9c4c-5ddac8b43325-config\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.038653 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e994011a-8ba4-4eed-9c4c-5ddac8b43325-web-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.040849 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e994011a-8ba4-4eed-9c4c-5ddac8b43325-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.042687 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e994011a-8ba4-4eed-9c4c-5ddac8b43325-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.043296 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e994011a-8ba4-4eed-9c4c-5ddac8b43325-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.046162 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e994011a-8ba4-4eed-9c4c-5ddac8b43325-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.046094 4811 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.046326 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4bcc5437-f61d-4c30-a2fb-514fee9806b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4bcc5437-f61d-4c30-a2fb-514fee9806b3\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/18767a3611d798f8934d1c357327d08a5ff746f9fb9afdbc502a0d35823d9e91/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.056501 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67nwt\" (UniqueName: \"kubernetes.io/projected/e994011a-8ba4-4eed-9c4c-5ddac8b43325-kube-api-access-67nwt\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.098265 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4bcc5437-f61d-4c30-a2fb-514fee9806b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4bcc5437-f61d-4c30-a2fb-514fee9806b3\") pod \"prometheus-metric-storage-0\" (UID: \"e994011a-8ba4-4eed-9c4c-5ddac8b43325\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.197029 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.336973 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3590443c-c5fd-4eec-a144-06cddd956651-etc-swift\") pod \"swift-storage-0\" (UID: \"3590443c-c5fd-4eec-a144-06cddd956651\") " pod="openstack/swift-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.343239 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3590443c-c5fd-4eec-a144-06cddd956651-etc-swift\") pod \"swift-storage-0\" (UID: \"3590443c-c5fd-4eec-a144-06cddd956651\") " pod="openstack/swift-storage-0" Feb 16 21:13:20 crc kubenswrapper[4811]: I0216 21:13:20.517831 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 16 21:13:21 crc kubenswrapper[4811]: W0216 21:13:20.662919 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode994011a_8ba4_4eed_9c4c_5ddac8b43325.slice/crio-9b99bfedaf83e9906f004950ba98acc4dd792a26c009f071d42040b8cffb0505 WatchSource:0}: Error finding container 9b99bfedaf83e9906f004950ba98acc4dd792a26c009f071d42040b8cffb0505: Status 404 returned error can't find the container with id 9b99bfedaf83e9906f004950ba98acc4dd792a26c009f071d42040b8cffb0505 Feb 16 21:13:21 crc kubenswrapper[4811]: I0216 21:13:20.668904 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 21:13:21 crc kubenswrapper[4811]: I0216 21:13:20.722349 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4247055a-8ca2-4a03-9a3a-d582d674b38a" path="/var/lib/kubelet/pods/4247055a-8ca2-4a03-9a3a-d582d674b38a/volumes" Feb 16 21:13:21 crc kubenswrapper[4811]: I0216 21:13:20.727534 4811 generic.go:334] "Generic (PLEG): 
container finished" podID="22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe" containerID="80ff26fc1997c91f922c78965c5dbdece23f16852a22210f747b539e8d734331" exitCode=0 Feb 16 21:13:21 crc kubenswrapper[4811]: I0216 21:13:20.743782 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e994011a-8ba4-4eed-9c4c-5ddac8b43325","Type":"ContainerStarted","Data":"9b99bfedaf83e9906f004950ba98acc4dd792a26c009f071d42040b8cffb0505"} Feb 16 21:13:21 crc kubenswrapper[4811]: I0216 21:13:20.743884 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2drks" event={"ID":"22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe","Type":"ContainerDied","Data":"80ff26fc1997c91f922c78965c5dbdece23f16852a22210f747b539e8d734331"} Feb 16 21:13:21 crc kubenswrapper[4811]: I0216 21:13:20.824626 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="5f050753-85f4-413e-92b6-0503db5e7391" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 21:13:21 crc kubenswrapper[4811]: I0216 21:13:21.804411 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 21:13:22 crc kubenswrapper[4811]: I0216 21:13:22.035349 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2drks" Feb 16 21:13:22 crc kubenswrapper[4811]: I0216 21:13:22.078220 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe-operator-scripts\") pod \"22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe\" (UID: \"22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe\") " Feb 16 21:13:22 crc kubenswrapper[4811]: I0216 21:13:22.078550 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsg\" (UniqueName: \"kubernetes.io/projected/22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe-kube-api-access-z5rsg\") pod \"22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe\" (UID: \"22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe\") " Feb 16 21:13:22 crc kubenswrapper[4811]: I0216 21:13:22.079515 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe" (UID: "22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:22 crc kubenswrapper[4811]: I0216 21:13:22.087409 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe-kube-api-access-z5rsg" (OuterVolumeSpecName: "kube-api-access-z5rsg") pod "22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe" (UID: "22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe"). InnerVolumeSpecName "kube-api-access-z5rsg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:13:22 crc kubenswrapper[4811]: I0216 21:13:22.180731 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5rsg\" (UniqueName: \"kubernetes.io/projected/22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe-kube-api-access-z5rsg\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:22 crc kubenswrapper[4811]: I0216 21:13:22.180761 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:22 crc kubenswrapper[4811]: I0216 21:13:22.748097 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3590443c-c5fd-4eec-a144-06cddd956651","Type":"ContainerStarted","Data":"4aa6dd1a69d3dcc21834932dfcfb301cd02a025541c8fda17b11c6c0306261e1"} Feb 16 21:13:22 crc kubenswrapper[4811]: I0216 21:13:22.749946 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2drks" event={"ID":"22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe","Type":"ContainerDied","Data":"bba1ed2d4d35c1251eccebc4d1a23fcdecdd0960b4eafb4af957488f87426e35"} Feb 16 21:13:22 crc kubenswrapper[4811]: I0216 21:13:22.749980 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bba1ed2d4d35c1251eccebc4d1a23fcdecdd0960b4eafb4af957488f87426e35" Feb 16 21:13:22 crc kubenswrapper[4811]: I0216 21:13:22.750024 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2drks" Feb 16 21:13:23 crc kubenswrapper[4811]: I0216 21:13:23.785722 4811 generic.go:334] "Generic (PLEG): container finished" podID="44ff615c-b0ce-42f1-b01a-7a59d64dacc1" containerID="bc35dfaa32f2c323ce09c949a19d8a2d682b9c0061ba49203b45ef63e29fa721" exitCode=0 Feb 16 21:13:23 crc kubenswrapper[4811]: I0216 21:13:23.785776 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rmfvr" event={"ID":"44ff615c-b0ce-42f1-b01a-7a59d64dacc1","Type":"ContainerDied","Data":"bc35dfaa32f2c323ce09c949a19d8a2d682b9c0061ba49203b45ef63e29fa721"} Feb 16 21:13:23 crc kubenswrapper[4811]: I0216 21:13:23.788550 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3590443c-c5fd-4eec-a144-06cddd956651","Type":"ContainerStarted","Data":"73260fad3aa38b5776615d82e23aa327282bcd85835f908167955caa71dc6839"} Feb 16 21:13:24 crc kubenswrapper[4811]: I0216 21:13:24.800330 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e994011a-8ba4-4eed-9c4c-5ddac8b43325","Type":"ContainerStarted","Data":"8acc4470886750c5f5d4a45e4b3a6b962ab17b59aa9148c75689ec08e8d5711e"} Feb 16 21:13:24 crc kubenswrapper[4811]: I0216 21:13:24.803824 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3590443c-c5fd-4eec-a144-06cddd956651","Type":"ContainerStarted","Data":"ada0092166479f8fd1d7a56f613bb6087b184e10258b75f4500b1db496fb5a5f"} Feb 16 21:13:24 crc kubenswrapper[4811]: I0216 21:13:24.803867 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3590443c-c5fd-4eec-a144-06cddd956651","Type":"ContainerStarted","Data":"582d5015c25efffb803245d1e5d22928232f7cdc0ebf620024fd46d1c2cac5dd"} Feb 16 21:13:24 crc kubenswrapper[4811]: I0216 21:13:24.803877 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"3590443c-c5fd-4eec-a144-06cddd956651","Type":"ContainerStarted","Data":"bd4f7bd526abc49e083018f3c86657fd608c6bf029e25ff2d5659ebedc3ca3dc"} Feb 16 21:13:25 crc kubenswrapper[4811]: I0216 21:13:25.451790 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rmfvr" Feb 16 21:13:25 crc kubenswrapper[4811]: I0216 21:13:25.551774 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44ff615c-b0ce-42f1-b01a-7a59d64dacc1-combined-ca-bundle\") pod \"44ff615c-b0ce-42f1-b01a-7a59d64dacc1\" (UID: \"44ff615c-b0ce-42f1-b01a-7a59d64dacc1\") " Feb 16 21:13:25 crc kubenswrapper[4811]: I0216 21:13:25.551969 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44ff615c-b0ce-42f1-b01a-7a59d64dacc1-config-data\") pod \"44ff615c-b0ce-42f1-b01a-7a59d64dacc1\" (UID: \"44ff615c-b0ce-42f1-b01a-7a59d64dacc1\") " Feb 16 21:13:25 crc kubenswrapper[4811]: I0216 21:13:25.552037 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jhxl\" (UniqueName: \"kubernetes.io/projected/44ff615c-b0ce-42f1-b01a-7a59d64dacc1-kube-api-access-4jhxl\") pod \"44ff615c-b0ce-42f1-b01a-7a59d64dacc1\" (UID: \"44ff615c-b0ce-42f1-b01a-7a59d64dacc1\") " Feb 16 21:13:25 crc kubenswrapper[4811]: I0216 21:13:25.556368 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44ff615c-b0ce-42f1-b01a-7a59d64dacc1-kube-api-access-4jhxl" (OuterVolumeSpecName: "kube-api-access-4jhxl") pod "44ff615c-b0ce-42f1-b01a-7a59d64dacc1" (UID: "44ff615c-b0ce-42f1-b01a-7a59d64dacc1"). InnerVolumeSpecName "kube-api-access-4jhxl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:13:25 crc kubenswrapper[4811]: I0216 21:13:25.580518 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44ff615c-b0ce-42f1-b01a-7a59d64dacc1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44ff615c-b0ce-42f1-b01a-7a59d64dacc1" (UID: "44ff615c-b0ce-42f1-b01a-7a59d64dacc1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:25 crc kubenswrapper[4811]: I0216 21:13:25.598000 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44ff615c-b0ce-42f1-b01a-7a59d64dacc1-config-data" (OuterVolumeSpecName: "config-data") pod "44ff615c-b0ce-42f1-b01a-7a59d64dacc1" (UID: "44ff615c-b0ce-42f1-b01a-7a59d64dacc1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:25 crc kubenswrapper[4811]: I0216 21:13:25.654581 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44ff615c-b0ce-42f1-b01a-7a59d64dacc1-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:25 crc kubenswrapper[4811]: I0216 21:13:25.654654 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jhxl\" (UniqueName: \"kubernetes.io/projected/44ff615c-b0ce-42f1-b01a-7a59d64dacc1-kube-api-access-4jhxl\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:25 crc kubenswrapper[4811]: I0216 21:13:25.654669 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44ff615c-b0ce-42f1-b01a-7a59d64dacc1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:25 crc kubenswrapper[4811]: I0216 21:13:25.814819 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"3590443c-c5fd-4eec-a144-06cddd956651","Type":"ContainerStarted","Data":"293d0f17b93b9f55e5c419fbc8b1f268695a300c34b7b6eaac4d26d246ef5d72"} Feb 16 21:13:25 crc kubenswrapper[4811]: I0216 21:13:25.814869 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3590443c-c5fd-4eec-a144-06cddd956651","Type":"ContainerStarted","Data":"eca87ee8519a1be5a852deb9771967e61d9539990906f5d2dc04520a4d1def56"} Feb 16 21:13:25 crc kubenswrapper[4811]: I0216 21:13:25.817950 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rmfvr" Feb 16 21:13:25 crc kubenswrapper[4811]: I0216 21:13:25.826295 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rmfvr" event={"ID":"44ff615c-b0ce-42f1-b01a-7a59d64dacc1","Type":"ContainerDied","Data":"36d348b68b73abba028ecab1ba3f4e6f87755fec0d31cea7ab6bf5207a359e1e"} Feb 16 21:13:25 crc kubenswrapper[4811]: I0216 21:13:25.826373 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36d348b68b73abba028ecab1ba3f4e6f87755fec0d31cea7ab6bf5207a359e1e" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.071850 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-9grpj"] Feb 16 21:13:26 crc kubenswrapper[4811]: E0216 21:13:26.072277 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe" containerName="mariadb-account-create-update" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.072290 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe" containerName="mariadb-account-create-update" Feb 16 21:13:26 crc kubenswrapper[4811]: E0216 21:13:26.072317 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44ff615c-b0ce-42f1-b01a-7a59d64dacc1" containerName="keystone-db-sync" Feb 16 21:13:26 crc kubenswrapper[4811]: 
I0216 21:13:26.072325 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="44ff615c-b0ce-42f1-b01a-7a59d64dacc1" containerName="keystone-db-sync" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.072531 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe" containerName="mariadb-account-create-update" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.072563 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="44ff615c-b0ce-42f1-b01a-7a59d64dacc1" containerName="keystone-db-sync" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.079835 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-9grpj" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.087604 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-9grpj"] Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.101408 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-4r8qf"] Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.103204 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4r8qf" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.105331 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.105498 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-s2qbh" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.105619 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.106967 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.107038 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.146623 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4r8qf"] Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.167477 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-9grpj\" (UID: \"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\") " pod="openstack/dnsmasq-dns-5c9d85d47c-9grpj" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.167524 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-credential-keys\") pod \"keystone-bootstrap-4r8qf\" (UID: \"e68df5a8-d13c-4c3c-ac36-b791cb990881\") " pod="openstack/keystone-bootstrap-4r8qf" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.167547 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-9grpj\" (UID: \"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\") " pod="openstack/dnsmasq-dns-5c9d85d47c-9grpj" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.167595 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-config\") pod \"dnsmasq-dns-5c9d85d47c-9grpj\" (UID: \"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\") " pod="openstack/dnsmasq-dns-5c9d85d47c-9grpj" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.167684 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-combined-ca-bundle\") pod \"keystone-bootstrap-4r8qf\" (UID: \"e68df5a8-d13c-4c3c-ac36-b791cb990881\") " pod="openstack/keystone-bootstrap-4r8qf" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.168274 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-fernet-keys\") pod \"keystone-bootstrap-4r8qf\" (UID: \"e68df5a8-d13c-4c3c-ac36-b791cb990881\") " pod="openstack/keystone-bootstrap-4r8qf" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.168343 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq9z7\" (UniqueName: \"kubernetes.io/projected/e68df5a8-d13c-4c3c-ac36-b791cb990881-kube-api-access-nq9z7\") pod \"keystone-bootstrap-4r8qf\" (UID: \"e68df5a8-d13c-4c3c-ac36-b791cb990881\") " pod="openstack/keystone-bootstrap-4r8qf" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.168413 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-56t4p\" (UniqueName: \"kubernetes.io/projected/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-kube-api-access-56t4p\") pod \"dnsmasq-dns-5c9d85d47c-9grpj\" (UID: \"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\") " pod="openstack/dnsmasq-dns-5c9d85d47c-9grpj" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.168505 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-config-data\") pod \"keystone-bootstrap-4r8qf\" (UID: \"e68df5a8-d13c-4c3c-ac36-b791cb990881\") " pod="openstack/keystone-bootstrap-4r8qf" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.168570 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-scripts\") pod \"keystone-bootstrap-4r8qf\" (UID: \"e68df5a8-d13c-4c3c-ac36-b791cb990881\") " pod="openstack/keystone-bootstrap-4r8qf" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.168657 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-9grpj\" (UID: \"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\") " pod="openstack/dnsmasq-dns-5c9d85d47c-9grpj" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.270338 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-config-data\") pod \"keystone-bootstrap-4r8qf\" (UID: \"e68df5a8-d13c-4c3c-ac36-b791cb990881\") " pod="openstack/keystone-bootstrap-4r8qf" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.270387 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-scripts\") pod \"keystone-bootstrap-4r8qf\" (UID: \"e68df5a8-d13c-4c3c-ac36-b791cb990881\") " pod="openstack/keystone-bootstrap-4r8qf" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.270422 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-9grpj\" (UID: \"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\") " pod="openstack/dnsmasq-dns-5c9d85d47c-9grpj" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.270484 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-9grpj\" (UID: \"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\") " pod="openstack/dnsmasq-dns-5c9d85d47c-9grpj" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.270504 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-credential-keys\") pod \"keystone-bootstrap-4r8qf\" (UID: \"e68df5a8-d13c-4c3c-ac36-b791cb990881\") " pod="openstack/keystone-bootstrap-4r8qf" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.270523 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-9grpj\" (UID: \"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\") " pod="openstack/dnsmasq-dns-5c9d85d47c-9grpj" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.270564 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-config\") pod 
\"dnsmasq-dns-5c9d85d47c-9grpj\" (UID: \"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\") " pod="openstack/dnsmasq-dns-5c9d85d47c-9grpj" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.270578 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-combined-ca-bundle\") pod \"keystone-bootstrap-4r8qf\" (UID: \"e68df5a8-d13c-4c3c-ac36-b791cb990881\") " pod="openstack/keystone-bootstrap-4r8qf" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.270603 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-fernet-keys\") pod \"keystone-bootstrap-4r8qf\" (UID: \"e68df5a8-d13c-4c3c-ac36-b791cb990881\") " pod="openstack/keystone-bootstrap-4r8qf" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.270618 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq9z7\" (UniqueName: \"kubernetes.io/projected/e68df5a8-d13c-4c3c-ac36-b791cb990881-kube-api-access-nq9z7\") pod \"keystone-bootstrap-4r8qf\" (UID: \"e68df5a8-d13c-4c3c-ac36-b791cb990881\") " pod="openstack/keystone-bootstrap-4r8qf" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.270641 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56t4p\" (UniqueName: \"kubernetes.io/projected/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-kube-api-access-56t4p\") pod \"dnsmasq-dns-5c9d85d47c-9grpj\" (UID: \"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\") " pod="openstack/dnsmasq-dns-5c9d85d47c-9grpj" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.272008 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-9grpj\" (UID: 
\"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\") " pod="openstack/dnsmasq-dns-5c9d85d47c-9grpj" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.272005 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-9grpj\" (UID: \"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\") " pod="openstack/dnsmasq-dns-5c9d85d47c-9grpj" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.272858 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-9grpj\" (UID: \"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\") " pod="openstack/dnsmasq-dns-5c9d85d47c-9grpj" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.272922 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-config\") pod \"dnsmasq-dns-5c9d85d47c-9grpj\" (UID: \"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\") " pod="openstack/dnsmasq-dns-5c9d85d47c-9grpj" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.283785 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-scripts\") pod \"keystone-bootstrap-4r8qf\" (UID: \"e68df5a8-d13c-4c3c-ac36-b791cb990881\") " pod="openstack/keystone-bootstrap-4r8qf" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.284340 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-credential-keys\") pod \"keystone-bootstrap-4r8qf\" (UID: \"e68df5a8-d13c-4c3c-ac36-b791cb990881\") " pod="openstack/keystone-bootstrap-4r8qf" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 
21:13:26.284899 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-fernet-keys\") pod \"keystone-bootstrap-4r8qf\" (UID: \"e68df5a8-d13c-4c3c-ac36-b791cb990881\") " pod="openstack/keystone-bootstrap-4r8qf" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.288632 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-combined-ca-bundle\") pod \"keystone-bootstrap-4r8qf\" (UID: \"e68df5a8-d13c-4c3c-ac36-b791cb990881\") " pod="openstack/keystone-bootstrap-4r8qf" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.289035 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-config-data\") pod \"keystone-bootstrap-4r8qf\" (UID: \"e68df5a8-d13c-4c3c-ac36-b791cb990881\") " pod="openstack/keystone-bootstrap-4r8qf" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.294287 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq9z7\" (UniqueName: \"kubernetes.io/projected/e68df5a8-d13c-4c3c-ac36-b791cb990881-kube-api-access-nq9z7\") pod \"keystone-bootstrap-4r8qf\" (UID: \"e68df5a8-d13c-4c3c-ac36-b791cb990881\") " pod="openstack/keystone-bootstrap-4r8qf" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.300258 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56t4p\" (UniqueName: \"kubernetes.io/projected/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-kube-api-access-56t4p\") pod \"dnsmasq-dns-5c9d85d47c-9grpj\" (UID: \"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\") " pod="openstack/dnsmasq-dns-5c9d85d47c-9grpj" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.308449 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-qv84d"] Feb 
16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.309595 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qv84d" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.312281 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.312482 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5x9lq" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.312606 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.330278 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.332337 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.341783 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.341997 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.342388 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qv84d"] Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.379604 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxzw2\" (UniqueName: \"kubernetes.io/projected/18bbdf69-d815-49b8-a29d-8b90a8e2987f-kube-api-access-nxzw2\") pod \"ceilometer-0\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " pod="openstack/ceilometer-0" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.379665 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-scripts\") pod \"cinder-db-sync-qv84d\" (UID: \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\") " pod="openstack/cinder-db-sync-qv84d" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.379690 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18bbdf69-d815-49b8-a29d-8b90a8e2987f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " pod="openstack/ceilometer-0" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.379714 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18bbdf69-d815-49b8-a29d-8b90a8e2987f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " pod="openstack/ceilometer-0" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.379748 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18bbdf69-d815-49b8-a29d-8b90a8e2987f-config-data\") pod \"ceilometer-0\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " pod="openstack/ceilometer-0" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.379775 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-combined-ca-bundle\") pod \"cinder-db-sync-qv84d\" (UID: \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\") " pod="openstack/cinder-db-sync-qv84d" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.379820 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-etc-machine-id\") pod \"cinder-db-sync-qv84d\" (UID: \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\") " pod="openstack/cinder-db-sync-qv84d" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.379859 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18bbdf69-d815-49b8-a29d-8b90a8e2987f-log-httpd\") pod \"ceilometer-0\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " pod="openstack/ceilometer-0" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.379880 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-db-sync-config-data\") pod \"cinder-db-sync-qv84d\" (UID: \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\") " pod="openstack/cinder-db-sync-qv84d" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.379918 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-config-data\") pod \"cinder-db-sync-qv84d\" (UID: \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\") " pod="openstack/cinder-db-sync-qv84d" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.379945 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18bbdf69-d815-49b8-a29d-8b90a8e2987f-scripts\") pod \"ceilometer-0\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " pod="openstack/ceilometer-0" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.379972 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18bbdf69-d815-49b8-a29d-8b90a8e2987f-run-httpd\") pod \"ceilometer-0\" (UID: 
\"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " pod="openstack/ceilometer-0" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.380002 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-882fb\" (UniqueName: \"kubernetes.io/projected/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-kube-api-access-882fb\") pod \"cinder-db-sync-qv84d\" (UID: \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\") " pod="openstack/cinder-db-sync-qv84d" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.408190 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-9grpj" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.423905 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4r8qf" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.457809 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.491026 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18bbdf69-d815-49b8-a29d-8b90a8e2987f-scripts\") pod \"ceilometer-0\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " pod="openstack/ceilometer-0" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.491106 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18bbdf69-d815-49b8-a29d-8b90a8e2987f-run-httpd\") pod \"ceilometer-0\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " pod="openstack/ceilometer-0" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.491162 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-882fb\" (UniqueName: \"kubernetes.io/projected/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-kube-api-access-882fb\") pod \"cinder-db-sync-qv84d\" (UID: 
\"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\") " pod="openstack/cinder-db-sync-qv84d" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.491277 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxzw2\" (UniqueName: \"kubernetes.io/projected/18bbdf69-d815-49b8-a29d-8b90a8e2987f-kube-api-access-nxzw2\") pod \"ceilometer-0\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " pod="openstack/ceilometer-0" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.491329 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-scripts\") pod \"cinder-db-sync-qv84d\" (UID: \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\") " pod="openstack/cinder-db-sync-qv84d" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.491351 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18bbdf69-d815-49b8-a29d-8b90a8e2987f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " pod="openstack/ceilometer-0" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.491380 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18bbdf69-d815-49b8-a29d-8b90a8e2987f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " pod="openstack/ceilometer-0" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.491447 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18bbdf69-d815-49b8-a29d-8b90a8e2987f-config-data\") pod \"ceilometer-0\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " pod="openstack/ceilometer-0" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.491476 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-combined-ca-bundle\") pod \"cinder-db-sync-qv84d\" (UID: \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\") " pod="openstack/cinder-db-sync-qv84d" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.491534 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-etc-machine-id\") pod \"cinder-db-sync-qv84d\" (UID: \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\") " pod="openstack/cinder-db-sync-qv84d" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.491577 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18bbdf69-d815-49b8-a29d-8b90a8e2987f-log-httpd\") pod \"ceilometer-0\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " pod="openstack/ceilometer-0" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.491597 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-db-sync-config-data\") pod \"cinder-db-sync-qv84d\" (UID: \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\") " pod="openstack/cinder-db-sync-qv84d" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.491655 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-config-data\") pod \"cinder-db-sync-qv84d\" (UID: \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\") " pod="openstack/cinder-db-sync-qv84d" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.508794 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-etc-machine-id\") 
pod \"cinder-db-sync-qv84d\" (UID: \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\") " pod="openstack/cinder-db-sync-qv84d" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.509357 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18bbdf69-d815-49b8-a29d-8b90a8e2987f-run-httpd\") pod \"ceilometer-0\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " pod="openstack/ceilometer-0" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.509575 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18bbdf69-d815-49b8-a29d-8b90a8e2987f-log-httpd\") pod \"ceilometer-0\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " pod="openstack/ceilometer-0" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.510553 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18bbdf69-d815-49b8-a29d-8b90a8e2987f-config-data\") pod \"ceilometer-0\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " pod="openstack/ceilometer-0" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.515572 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-config-data\") pod \"cinder-db-sync-qv84d\" (UID: \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\") " pod="openstack/cinder-db-sync-qv84d" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.516447 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18bbdf69-d815-49b8-a29d-8b90a8e2987f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " pod="openstack/ceilometer-0" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.516759 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-combined-ca-bundle\") pod \"cinder-db-sync-qv84d\" (UID: \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\") " pod="openstack/cinder-db-sync-qv84d" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.522951 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18bbdf69-d815-49b8-a29d-8b90a8e2987f-scripts\") pod \"ceilometer-0\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " pod="openstack/ceilometer-0" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.529654 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-x49kk"] Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.530917 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-x49kk" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.535294 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-8cqxm" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.535405 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.535539 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.535921 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.539486 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-x49kk"] Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.541985 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-scripts\") pod 
\"cinder-db-sync-qv84d\" (UID: \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\") " pod="openstack/cinder-db-sync-qv84d" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.548261 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-gbrql"] Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.548861 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18bbdf69-d815-49b8-a29d-8b90a8e2987f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " pod="openstack/ceilometer-0" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.554917 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-gbrql"] Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.559644 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gbrql" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.563057 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-njvjn"] Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.563393 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.563711 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-lvcs7" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.564011 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-db-sync-config-data\") pod \"cinder-db-sync-qv84d\" (UID: \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\") " pod="openstack/cinder-db-sync-qv84d" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.565026 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 21:13:26 crc 
kubenswrapper[4811]: I0216 21:13:26.565689 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-njvjn" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.588837 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.588991 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-wfvl6" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.591480 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxzw2\" (UniqueName: \"kubernetes.io/projected/18bbdf69-d815-49b8-a29d-8b90a8e2987f-kube-api-access-nxzw2\") pod \"ceilometer-0\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " pod="openstack/ceilometer-0" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.591741 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-882fb\" (UniqueName: \"kubernetes.io/projected/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-kube-api-access-882fb\") pod \"cinder-db-sync-qv84d\" (UID: \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\") " pod="openstack/cinder-db-sync-qv84d" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.592807 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46d0afcb-2a14-4e67-89fc-ed848d1637ce-scripts\") pod \"cloudkitty-db-sync-x49kk\" (UID: \"46d0afcb-2a14-4e67-89fc-ed848d1637ce\") " pod="openstack/cloudkitty-db-sync-x49kk" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.592852 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3237b6a2-9b91-41f2-bcea-21b9f5e91f80-combined-ca-bundle\") pod \"neutron-db-sync-gbrql\" (UID: \"3237b6a2-9b91-41f2-bcea-21b9f5e91f80\") " 
pod="openstack/neutron-db-sync-gbrql" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.592875 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46d0afcb-2a14-4e67-89fc-ed848d1637ce-combined-ca-bundle\") pod \"cloudkitty-db-sync-x49kk\" (UID: \"46d0afcb-2a14-4e67-89fc-ed848d1637ce\") " pod="openstack/cloudkitty-db-sync-x49kk" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.592899 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcb2p\" (UniqueName: \"kubernetes.io/projected/3237b6a2-9b91-41f2-bcea-21b9f5e91f80-kube-api-access-gcb2p\") pod \"neutron-db-sync-gbrql\" (UID: \"3237b6a2-9b91-41f2-bcea-21b9f5e91f80\") " pod="openstack/neutron-db-sync-gbrql" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.592928 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d5t7\" (UniqueName: \"kubernetes.io/projected/89a1f359-cb47-470b-ad6e-48d11efacfce-kube-api-access-5d5t7\") pod \"barbican-db-sync-njvjn\" (UID: \"89a1f359-cb47-470b-ad6e-48d11efacfce\") " pod="openstack/barbican-db-sync-njvjn" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.592943 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3237b6a2-9b91-41f2-bcea-21b9f5e91f80-config\") pod \"neutron-db-sync-gbrql\" (UID: \"3237b6a2-9b91-41f2-bcea-21b9f5e91f80\") " pod="openstack/neutron-db-sync-gbrql" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.592989 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/89a1f359-cb47-470b-ad6e-48d11efacfce-db-sync-config-data\") pod \"barbican-db-sync-njvjn\" (UID: \"89a1f359-cb47-470b-ad6e-48d11efacfce\") " 
pod="openstack/barbican-db-sync-njvjn" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.593010 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46d0afcb-2a14-4e67-89fc-ed848d1637ce-config-data\") pod \"cloudkitty-db-sync-x49kk\" (UID: \"46d0afcb-2a14-4e67-89fc-ed848d1637ce\") " pod="openstack/cloudkitty-db-sync-x49kk" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.593053 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s56zx\" (UniqueName: \"kubernetes.io/projected/46d0afcb-2a14-4e67-89fc-ed848d1637ce-kube-api-access-s56zx\") pod \"cloudkitty-db-sync-x49kk\" (UID: \"46d0afcb-2a14-4e67-89fc-ed848d1637ce\") " pod="openstack/cloudkitty-db-sync-x49kk" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.593071 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/46d0afcb-2a14-4e67-89fc-ed848d1637ce-certs\") pod \"cloudkitty-db-sync-x49kk\" (UID: \"46d0afcb-2a14-4e67-89fc-ed848d1637ce\") " pod="openstack/cloudkitty-db-sync-x49kk" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.593101 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a1f359-cb47-470b-ad6e-48d11efacfce-combined-ca-bundle\") pod \"barbican-db-sync-njvjn\" (UID: \"89a1f359-cb47-470b-ad6e-48d11efacfce\") " pod="openstack/barbican-db-sync-njvjn" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.651750 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-njvjn"] Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.659972 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-8xm4f"] Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 
21:13:26.665993 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8xm4f" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.668098 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-57xcn" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.668282 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.668449 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.675545 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8xm4f"] Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.677143 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qv84d" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.687097 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-9grpj"] Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.694969 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3518c12b-e37a-4c8d-bbb5-c84f79d45948-scripts\") pod \"placement-db-sync-8xm4f\" (UID: \"3518c12b-e37a-4c8d-bbb5-c84f79d45948\") " pod="openstack/placement-db-sync-8xm4f" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.695027 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/89a1f359-cb47-470b-ad6e-48d11efacfce-db-sync-config-data\") pod \"barbican-db-sync-njvjn\" (UID: \"89a1f359-cb47-470b-ad6e-48d11efacfce\") " pod="openstack/barbican-db-sync-njvjn" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.695058 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46d0afcb-2a14-4e67-89fc-ed848d1637ce-config-data\") pod \"cloudkitty-db-sync-x49kk\" (UID: \"46d0afcb-2a14-4e67-89fc-ed848d1637ce\") " pod="openstack/cloudkitty-db-sync-x49kk" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.695114 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s56zx\" (UniqueName: \"kubernetes.io/projected/46d0afcb-2a14-4e67-89fc-ed848d1637ce-kube-api-access-s56zx\") pod \"cloudkitty-db-sync-x49kk\" (UID: \"46d0afcb-2a14-4e67-89fc-ed848d1637ce\") " pod="openstack/cloudkitty-db-sync-x49kk" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.695131 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/46d0afcb-2a14-4e67-89fc-ed848d1637ce-certs\") pod \"cloudkitty-db-sync-x49kk\" (UID: \"46d0afcb-2a14-4e67-89fc-ed848d1637ce\") " pod="openstack/cloudkitty-db-sync-x49kk" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.695158 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a1f359-cb47-470b-ad6e-48d11efacfce-combined-ca-bundle\") pod \"barbican-db-sync-njvjn\" (UID: \"89a1f359-cb47-470b-ad6e-48d11efacfce\") " pod="openstack/barbican-db-sync-njvjn" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.695190 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3518c12b-e37a-4c8d-bbb5-c84f79d45948-combined-ca-bundle\") pod \"placement-db-sync-8xm4f\" (UID: \"3518c12b-e37a-4c8d-bbb5-c84f79d45948\") " pod="openstack/placement-db-sync-8xm4f" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.695274 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/46d0afcb-2a14-4e67-89fc-ed848d1637ce-scripts\") pod \"cloudkitty-db-sync-x49kk\" (UID: \"46d0afcb-2a14-4e67-89fc-ed848d1637ce\") " pod="openstack/cloudkitty-db-sync-x49kk" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.695292 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3518c12b-e37a-4c8d-bbb5-c84f79d45948-logs\") pod \"placement-db-sync-8xm4f\" (UID: \"3518c12b-e37a-4c8d-bbb5-c84f79d45948\") " pod="openstack/placement-db-sync-8xm4f" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.695331 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3237b6a2-9b91-41f2-bcea-21b9f5e91f80-combined-ca-bundle\") pod \"neutron-db-sync-gbrql\" (UID: \"3237b6a2-9b91-41f2-bcea-21b9f5e91f80\") " pod="openstack/neutron-db-sync-gbrql" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.695352 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46d0afcb-2a14-4e67-89fc-ed848d1637ce-combined-ca-bundle\") pod \"cloudkitty-db-sync-x49kk\" (UID: \"46d0afcb-2a14-4e67-89fc-ed848d1637ce\") " pod="openstack/cloudkitty-db-sync-x49kk" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.695374 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcb2p\" (UniqueName: \"kubernetes.io/projected/3237b6a2-9b91-41f2-bcea-21b9f5e91f80-kube-api-access-gcb2p\") pod \"neutron-db-sync-gbrql\" (UID: \"3237b6a2-9b91-41f2-bcea-21b9f5e91f80\") " pod="openstack/neutron-db-sync-gbrql" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.695395 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcx4c\" (UniqueName: 
\"kubernetes.io/projected/3518c12b-e37a-4c8d-bbb5-c84f79d45948-kube-api-access-bcx4c\") pod \"placement-db-sync-8xm4f\" (UID: \"3518c12b-e37a-4c8d-bbb5-c84f79d45948\") " pod="openstack/placement-db-sync-8xm4f" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.695415 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3518c12b-e37a-4c8d-bbb5-c84f79d45948-config-data\") pod \"placement-db-sync-8xm4f\" (UID: \"3518c12b-e37a-4c8d-bbb5-c84f79d45948\") " pod="openstack/placement-db-sync-8xm4f" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.695474 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d5t7\" (UniqueName: \"kubernetes.io/projected/89a1f359-cb47-470b-ad6e-48d11efacfce-kube-api-access-5d5t7\") pod \"barbican-db-sync-njvjn\" (UID: \"89a1f359-cb47-470b-ad6e-48d11efacfce\") " pod="openstack/barbican-db-sync-njvjn" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.695494 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3237b6a2-9b91-41f2-bcea-21b9f5e91f80-config\") pod \"neutron-db-sync-gbrql\" (UID: \"3237b6a2-9b91-41f2-bcea-21b9f5e91f80\") " pod="openstack/neutron-db-sync-gbrql" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.698722 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3237b6a2-9b91-41f2-bcea-21b9f5e91f80-config\") pod \"neutron-db-sync-gbrql\" (UID: \"3237b6a2-9b91-41f2-bcea-21b9f5e91f80\") " pod="openstack/neutron-db-sync-gbrql" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.700025 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/89a1f359-cb47-470b-ad6e-48d11efacfce-db-sync-config-data\") pod \"barbican-db-sync-njvjn\" (UID: 
\"89a1f359-cb47-470b-ad6e-48d11efacfce\") " pod="openstack/barbican-db-sync-njvjn" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.701957 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.707575 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/46d0afcb-2a14-4e67-89fc-ed848d1637ce-certs\") pod \"cloudkitty-db-sync-x49kk\" (UID: \"46d0afcb-2a14-4e67-89fc-ed848d1637ce\") " pod="openstack/cloudkitty-db-sync-x49kk" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.709041 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46d0afcb-2a14-4e67-89fc-ed848d1637ce-config-data\") pod \"cloudkitty-db-sync-x49kk\" (UID: \"46d0afcb-2a14-4e67-89fc-ed848d1637ce\") " pod="openstack/cloudkitty-db-sync-x49kk" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.713892 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46d0afcb-2a14-4e67-89fc-ed848d1637ce-scripts\") pod \"cloudkitty-db-sync-x49kk\" (UID: \"46d0afcb-2a14-4e67-89fc-ed848d1637ce\") " pod="openstack/cloudkitty-db-sync-x49kk" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.727837 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcb2p\" (UniqueName: \"kubernetes.io/projected/3237b6a2-9b91-41f2-bcea-21b9f5e91f80-kube-api-access-gcb2p\") pod \"neutron-db-sync-gbrql\" (UID: \"3237b6a2-9b91-41f2-bcea-21b9f5e91f80\") " pod="openstack/neutron-db-sync-gbrql" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.730510 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s56zx\" (UniqueName: \"kubernetes.io/projected/46d0afcb-2a14-4e67-89fc-ed848d1637ce-kube-api-access-s56zx\") pod 
\"cloudkitty-db-sync-x49kk\" (UID: \"46d0afcb-2a14-4e67-89fc-ed848d1637ce\") " pod="openstack/cloudkitty-db-sync-x49kk" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.730945 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d5t7\" (UniqueName: \"kubernetes.io/projected/89a1f359-cb47-470b-ad6e-48d11efacfce-kube-api-access-5d5t7\") pod \"barbican-db-sync-njvjn\" (UID: \"89a1f359-cb47-470b-ad6e-48d11efacfce\") " pod="openstack/barbican-db-sync-njvjn" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.763165 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a1f359-cb47-470b-ad6e-48d11efacfce-combined-ca-bundle\") pod \"barbican-db-sync-njvjn\" (UID: \"89a1f359-cb47-470b-ad6e-48d11efacfce\") " pod="openstack/barbican-db-sync-njvjn" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.763889 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46d0afcb-2a14-4e67-89fc-ed848d1637ce-combined-ca-bundle\") pod \"cloudkitty-db-sync-x49kk\" (UID: \"46d0afcb-2a14-4e67-89fc-ed848d1637ce\") " pod="openstack/cloudkitty-db-sync-x49kk" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.764795 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-njvjn" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.767924 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-pxsxp"] Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.769320 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3237b6a2-9b91-41f2-bcea-21b9f5e91f80-combined-ca-bundle\") pod \"neutron-db-sync-gbrql\" (UID: \"3237b6a2-9b91-41f2-bcea-21b9f5e91f80\") " pod="openstack/neutron-db-sync-gbrql" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.792580 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.795930 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-pxsxp"] Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.796924 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3518c12b-e37a-4c8d-bbb5-c84f79d45948-logs\") pod \"placement-db-sync-8xm4f\" (UID: \"3518c12b-e37a-4c8d-bbb5-c84f79d45948\") " pod="openstack/placement-db-sync-8xm4f" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.797021 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e927c15d-6ca1-4473-a79a-52d223380f18-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-pxsxp\" (UID: \"e927c15d-6ca1-4473-a79a-52d223380f18\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.797061 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcx4c\" (UniqueName: \"kubernetes.io/projected/3518c12b-e37a-4c8d-bbb5-c84f79d45948-kube-api-access-bcx4c\") pod 
\"placement-db-sync-8xm4f\" (UID: \"3518c12b-e37a-4c8d-bbb5-c84f79d45948\") " pod="openstack/placement-db-sync-8xm4f" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.797082 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3518c12b-e37a-4c8d-bbb5-c84f79d45948-config-data\") pod \"placement-db-sync-8xm4f\" (UID: \"3518c12b-e37a-4c8d-bbb5-c84f79d45948\") " pod="openstack/placement-db-sync-8xm4f" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.797152 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e927c15d-6ca1-4473-a79a-52d223380f18-config\") pod \"dnsmasq-dns-6ffb94d8ff-pxsxp\" (UID: \"e927c15d-6ca1-4473-a79a-52d223380f18\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.797183 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3518c12b-e37a-4c8d-bbb5-c84f79d45948-scripts\") pod \"placement-db-sync-8xm4f\" (UID: \"3518c12b-e37a-4c8d-bbb5-c84f79d45948\") " pod="openstack/placement-db-sync-8xm4f" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.797236 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2dq9\" (UniqueName: \"kubernetes.io/projected/e927c15d-6ca1-4473-a79a-52d223380f18-kube-api-access-h2dq9\") pod \"dnsmasq-dns-6ffb94d8ff-pxsxp\" (UID: \"e927c15d-6ca1-4473-a79a-52d223380f18\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.797333 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e927c15d-6ca1-4473-a79a-52d223380f18-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-pxsxp\" (UID: 
\"e927c15d-6ca1-4473-a79a-52d223380f18\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.797375 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e927c15d-6ca1-4473-a79a-52d223380f18-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-pxsxp\" (UID: \"e927c15d-6ca1-4473-a79a-52d223380f18\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.797395 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3518c12b-e37a-4c8d-bbb5-c84f79d45948-combined-ca-bundle\") pod \"placement-db-sync-8xm4f\" (UID: \"3518c12b-e37a-4c8d-bbb5-c84f79d45948\") " pod="openstack/placement-db-sync-8xm4f" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.798630 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3518c12b-e37a-4c8d-bbb5-c84f79d45948-logs\") pod \"placement-db-sync-8xm4f\" (UID: \"3518c12b-e37a-4c8d-bbb5-c84f79d45948\") " pod="openstack/placement-db-sync-8xm4f" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.815076 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3518c12b-e37a-4c8d-bbb5-c84f79d45948-config-data\") pod \"placement-db-sync-8xm4f\" (UID: \"3518c12b-e37a-4c8d-bbb5-c84f79d45948\") " pod="openstack/placement-db-sync-8xm4f" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.815655 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3518c12b-e37a-4c8d-bbb5-c84f79d45948-scripts\") pod \"placement-db-sync-8xm4f\" (UID: \"3518c12b-e37a-4c8d-bbb5-c84f79d45948\") " pod="openstack/placement-db-sync-8xm4f" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.817370 
4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3518c12b-e37a-4c8d-bbb5-c84f79d45948-combined-ca-bundle\") pod \"placement-db-sync-8xm4f\" (UID: \"3518c12b-e37a-4c8d-bbb5-c84f79d45948\") " pod="openstack/placement-db-sync-8xm4f" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.845818 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcx4c\" (UniqueName: \"kubernetes.io/projected/3518c12b-e37a-4c8d-bbb5-c84f79d45948-kube-api-access-bcx4c\") pod \"placement-db-sync-8xm4f\" (UID: \"3518c12b-e37a-4c8d-bbb5-c84f79d45948\") " pod="openstack/placement-db-sync-8xm4f" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.872870 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3590443c-c5fd-4eec-a144-06cddd956651","Type":"ContainerStarted","Data":"4c11388053942721edcc70370b6068b107f1f8c615e798f08a32b3ce6ae862d3"} Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.872918 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3590443c-c5fd-4eec-a144-06cddd956651","Type":"ContainerStarted","Data":"f749f51dd41a238bd64110a45dc611e4dd3f6163de07e8479340f26f7b4cbedb"} Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.875680 4811 generic.go:334] "Generic (PLEG): container finished" podID="3a2e3a6d-e105-43a4-bdae-9ef2bde0f137" containerID="5e299b53bfb4a24c4ae0c44540b5106081d775033193a3bf8fa260de54391459" exitCode=0 Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.875755 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fx82t" event={"ID":"3a2e3a6d-e105-43a4-bdae-9ef2bde0f137","Type":"ContainerDied","Data":"5e299b53bfb4a24c4ae0c44540b5106081d775033193a3bf8fa260de54391459"} Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.898246 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e927c15d-6ca1-4473-a79a-52d223380f18-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-pxsxp\" (UID: \"e927c15d-6ca1-4473-a79a-52d223380f18\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.898341 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e927c15d-6ca1-4473-a79a-52d223380f18-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-pxsxp\" (UID: \"e927c15d-6ca1-4473-a79a-52d223380f18\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.898393 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e927c15d-6ca1-4473-a79a-52d223380f18-config\") pod \"dnsmasq-dns-6ffb94d8ff-pxsxp\" (UID: \"e927c15d-6ca1-4473-a79a-52d223380f18\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.898425 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2dq9\" (UniqueName: \"kubernetes.io/projected/e927c15d-6ca1-4473-a79a-52d223380f18-kube-api-access-h2dq9\") pod \"dnsmasq-dns-6ffb94d8ff-pxsxp\" (UID: \"e927c15d-6ca1-4473-a79a-52d223380f18\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.898466 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e927c15d-6ca1-4473-a79a-52d223380f18-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-pxsxp\" (UID: \"e927c15d-6ca1-4473-a79a-52d223380f18\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.899240 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/e927c15d-6ca1-4473-a79a-52d223380f18-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-pxsxp\" (UID: \"e927c15d-6ca1-4473-a79a-52d223380f18\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.899802 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e927c15d-6ca1-4473-a79a-52d223380f18-config\") pod \"dnsmasq-dns-6ffb94d8ff-pxsxp\" (UID: \"e927c15d-6ca1-4473-a79a-52d223380f18\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.899989 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e927c15d-6ca1-4473-a79a-52d223380f18-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-pxsxp\" (UID: \"e927c15d-6ca1-4473-a79a-52d223380f18\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.900577 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e927c15d-6ca1-4473-a79a-52d223380f18-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-pxsxp\" (UID: \"e927c15d-6ca1-4473-a79a-52d223380f18\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" Feb 16 21:13:26 crc kubenswrapper[4811]: I0216 21:13:26.924923 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2dq9\" (UniqueName: \"kubernetes.io/projected/e927c15d-6ca1-4473-a79a-52d223380f18-kube-api-access-h2dq9\") pod \"dnsmasq-dns-6ffb94d8ff-pxsxp\" (UID: \"e927c15d-6ca1-4473-a79a-52d223380f18\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" Feb 16 21:13:27 crc kubenswrapper[4811]: I0216 21:13:27.025606 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-x49kk" Feb 16 21:13:27 crc kubenswrapper[4811]: I0216 21:13:27.048972 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gbrql" Feb 16 21:13:27 crc kubenswrapper[4811]: I0216 21:13:27.072803 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8xm4f" Feb 16 21:13:27 crc kubenswrapper[4811]: I0216 21:13:27.119770 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" Feb 16 21:13:27 crc kubenswrapper[4811]: I0216 21:13:27.212602 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4r8qf"] Feb 16 21:13:27 crc kubenswrapper[4811]: W0216 21:13:27.235655 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode68df5a8_d13c_4c3c_ac36_b791cb990881.slice/crio-8ad99e8ebeead3850a85e83ac2495ae3c7745b7081585e19b8483de6410280cf WatchSource:0}: Error finding container 8ad99e8ebeead3850a85e83ac2495ae3c7745b7081585e19b8483de6410280cf: Status 404 returned error can't find the container with id 8ad99e8ebeead3850a85e83ac2495ae3c7745b7081585e19b8483de6410280cf Feb 16 21:13:27 crc kubenswrapper[4811]: I0216 21:13:27.401809 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-9grpj"] Feb 16 21:13:27 crc kubenswrapper[4811]: I0216 21:13:27.465481 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qv84d"] Feb 16 21:13:27 crc kubenswrapper[4811]: I0216 21:13:27.511253 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:13:27 crc kubenswrapper[4811]: I0216 21:13:27.531721 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-njvjn"] Feb 16 21:13:27 crc kubenswrapper[4811]: W0216 21:13:27.552335 
4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a07ef56_cd30_4652_9fdd_65279e9b5fb5.slice/crio-4549522a8b1cbdfd96ec2f891af8ec68207622ca88a4431eba19771e41d80e4c WatchSource:0}: Error finding container 4549522a8b1cbdfd96ec2f891af8ec68207622ca88a4431eba19771e41d80e4c: Status 404 returned error can't find the container with id 4549522a8b1cbdfd96ec2f891af8ec68207622ca88a4431eba19771e41d80e4c Feb 16 21:13:27 crc kubenswrapper[4811]: I0216 21:13:27.887382 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qv84d" event={"ID":"6a07ef56-cd30-4652-9fdd-65279e9b5fb5","Type":"ContainerStarted","Data":"4549522a8b1cbdfd96ec2f891af8ec68207622ca88a4431eba19771e41d80e4c"} Feb 16 21:13:27 crc kubenswrapper[4811]: I0216 21:13:27.891105 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-njvjn" event={"ID":"89a1f359-cb47-470b-ad6e-48d11efacfce","Type":"ContainerStarted","Data":"cf4813fd3d681853193b7e0066dd95991615b849f0c39f442a9491cac2b0e39c"} Feb 16 21:13:27 crc kubenswrapper[4811]: I0216 21:13:27.892658 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18bbdf69-d815-49b8-a29d-8b90a8e2987f","Type":"ContainerStarted","Data":"1556def253178d0e96c11d7bb3c14ac5475e77c2cb66c7001cdad596f525f50d"} Feb 16 21:13:27 crc kubenswrapper[4811]: I0216 21:13:27.907378 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4r8qf" event={"ID":"e68df5a8-d13c-4c3c-ac36-b791cb990881","Type":"ContainerStarted","Data":"8dd8c402b8048ef6a4f3f27495097c1a76f9e7ad1777f0d4c60d692eae2434fd"} Feb 16 21:13:27 crc kubenswrapper[4811]: I0216 21:13:27.907411 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4r8qf" 
event={"ID":"e68df5a8-d13c-4c3c-ac36-b791cb990881","Type":"ContainerStarted","Data":"8ad99e8ebeead3850a85e83ac2495ae3c7745b7081585e19b8483de6410280cf"} Feb 16 21:13:27 crc kubenswrapper[4811]: I0216 21:13:27.909310 4811 generic.go:334] "Generic (PLEG): container finished" podID="771eab31-2aca-4f27-aba3-5dfb9ab8c25c" containerID="9a477de0b014de404d9e6cb9a882bf2bda550241dd85bfab31a0882aa33b358e" exitCode=0 Feb 16 21:13:27 crc kubenswrapper[4811]: I0216 21:13:27.909552 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-9grpj" event={"ID":"771eab31-2aca-4f27-aba3-5dfb9ab8c25c","Type":"ContainerDied","Data":"9a477de0b014de404d9e6cb9a882bf2bda550241dd85bfab31a0882aa33b358e"} Feb 16 21:13:27 crc kubenswrapper[4811]: I0216 21:13:27.909584 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-9grpj" event={"ID":"771eab31-2aca-4f27-aba3-5dfb9ab8c25c","Type":"ContainerStarted","Data":"09f586d8a791f8c40b04695a3ffc39c47e4422e5b48b332618876c21a25fba7e"} Feb 16 21:13:27 crc kubenswrapper[4811]: I0216 21:13:27.927586 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-4r8qf" podStartSLOduration=1.9275698650000002 podStartE2EDuration="1.927569865s" podCreationTimestamp="2026-02-16 21:13:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:13:27.924009446 +0000 UTC m=+1025.853305404" watchObservedRunningTime="2026-02-16 21:13:27.927569865 +0000 UTC m=+1025.856865803" Feb 16 21:13:28 crc kubenswrapper[4811]: I0216 21:13:28.000927 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8xm4f"] Feb 16 21:13:28 crc kubenswrapper[4811]: I0216 21:13:28.011180 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-x49kk"] Feb 16 21:13:28 crc kubenswrapper[4811]: I0216 21:13:28.064391 4811 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-pxsxp"] Feb 16 21:13:28 crc kubenswrapper[4811]: I0216 21:13:28.166287 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-gbrql"] Feb 16 21:13:28 crc kubenswrapper[4811]: I0216 21:13:28.231939 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:13:28 crc kubenswrapper[4811]: W0216 21:13:28.619375 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3237b6a2_9b91_41f2_bcea_21b9f5e91f80.slice/crio-b95cf1ec9c4959cecafb38132608a88fc2135a22495562cbde4ab9b93885780c WatchSource:0}: Error finding container b95cf1ec9c4959cecafb38132608a88fc2135a22495562cbde4ab9b93885780c: Status 404 returned error can't find the container with id b95cf1ec9c4959cecafb38132608a88fc2135a22495562cbde4ab9b93885780c Feb 16 21:13:28 crc kubenswrapper[4811]: E0216 21:13:28.776136 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 21:13:28 crc kubenswrapper[4811]: E0216 21:13:28.776475 4811 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 21:13:28 crc kubenswrapper[4811]: E0216 21:13:28.776616 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s56zx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-x49kk_openstack(46d0afcb-2a14-4e67-89fc-ed848d1637ce): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:13:28 crc kubenswrapper[4811]: E0216 21:13:28.777812 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:13:28 crc kubenswrapper[4811]: I0216 21:13:28.922076 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-9grpj" event={"ID":"771eab31-2aca-4f27-aba3-5dfb9ab8c25c","Type":"ContainerDied","Data":"09f586d8a791f8c40b04695a3ffc39c47e4422e5b48b332618876c21a25fba7e"} Feb 16 21:13:28 crc kubenswrapper[4811]: I0216 21:13:28.922120 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09f586d8a791f8c40b04695a3ffc39c47e4422e5b48b332618876c21a25fba7e" Feb 16 21:13:28 crc kubenswrapper[4811]: I0216 21:13:28.923962 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8xm4f" event={"ID":"3518c12b-e37a-4c8d-bbb5-c84f79d45948","Type":"ContainerStarted","Data":"653053a168c0501aeeacdd45108fc99f1095d7c6b2c08c0d28f3eafb53592f4b"} Feb 16 21:13:28 crc kubenswrapper[4811]: I0216 21:13:28.926134 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gbrql" event={"ID":"3237b6a2-9b91-41f2-bcea-21b9f5e91f80","Type":"ContainerStarted","Data":"b95cf1ec9c4959cecafb38132608a88fc2135a22495562cbde4ab9b93885780c"} Feb 16 21:13:28 crc kubenswrapper[4811]: I0216 21:13:28.929629 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fx82t" event={"ID":"3a2e3a6d-e105-43a4-bdae-9ef2bde0f137","Type":"ContainerDied","Data":"b460f63667a3a1343b952d2cae9f867e1009ea7d8827a497e081099b7d0cc441"} Feb 16 21:13:28 crc kubenswrapper[4811]: I0216 21:13:28.929650 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b460f63667a3a1343b952d2cae9f867e1009ea7d8827a497e081099b7d0cc441" Feb 16 21:13:28 crc kubenswrapper[4811]: I0216 21:13:28.931026 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-x49kk" 
event={"ID":"46d0afcb-2a14-4e67-89fc-ed848d1637ce","Type":"ContainerStarted","Data":"7f954f7ef2ba8738d4ef97a3d7ae60f802a1db247d01c8786441e91c34a45730"} Feb 16 21:13:28 crc kubenswrapper[4811]: E0216 21:13:28.933880 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:13:28 crc kubenswrapper[4811]: I0216 21:13:28.934333 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" event={"ID":"e927c15d-6ca1-4473-a79a-52d223380f18","Type":"ContainerStarted","Data":"c4aa328f1555b42c4d161845752a672dbf1060d33d66592c54a6305827c2fa44"} Feb 16 21:13:28 crc kubenswrapper[4811]: I0216 21:13:28.946647 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-9grpj" Feb 16 21:13:28 crc kubenswrapper[4811]: I0216 21:13:28.993334 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-fx82t" Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.049349 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-config\") pod \"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\" (UID: \"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\") " Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.049402 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-ovsdbserver-sb\") pod \"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\" (UID: \"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\") " Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.049468 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56t4p\" (UniqueName: \"kubernetes.io/projected/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-kube-api-access-56t4p\") pod \"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\" (UID: \"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\") " Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.049525 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-dns-svc\") pod \"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\" (UID: \"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\") " Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.049549 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-ovsdbserver-nb\") pod \"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\" (UID: \"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\") " Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.080554 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-kube-api-access-56t4p" (OuterVolumeSpecName: "kube-api-access-56t4p") pod "771eab31-2aca-4f27-aba3-5dfb9ab8c25c" (UID: "771eab31-2aca-4f27-aba3-5dfb9ab8c25c"). InnerVolumeSpecName "kube-api-access-56t4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.155558 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a2e3a6d-e105-43a4-bdae-9ef2bde0f137-combined-ca-bundle\") pod \"3a2e3a6d-e105-43a4-bdae-9ef2bde0f137\" (UID: \"3a2e3a6d-e105-43a4-bdae-9ef2bde0f137\") " Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.155657 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw84l\" (UniqueName: \"kubernetes.io/projected/3a2e3a6d-e105-43a4-bdae-9ef2bde0f137-kube-api-access-xw84l\") pod \"3a2e3a6d-e105-43a4-bdae-9ef2bde0f137\" (UID: \"3a2e3a6d-e105-43a4-bdae-9ef2bde0f137\") " Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.159648 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a2e3a6d-e105-43a4-bdae-9ef2bde0f137-db-sync-config-data\") pod \"3a2e3a6d-e105-43a4-bdae-9ef2bde0f137\" (UID: \"3a2e3a6d-e105-43a4-bdae-9ef2bde0f137\") " Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.159717 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a2e3a6d-e105-43a4-bdae-9ef2bde0f137-config-data\") pod \"3a2e3a6d-e105-43a4-bdae-9ef2bde0f137\" (UID: \"3a2e3a6d-e105-43a4-bdae-9ef2bde0f137\") " Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.160451 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56t4p\" (UniqueName: 
\"kubernetes.io/projected/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-kube-api-access-56t4p\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.366755 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a2e3a6d-e105-43a4-bdae-9ef2bde0f137-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3a2e3a6d-e105-43a4-bdae-9ef2bde0f137" (UID: "3a2e3a6d-e105-43a4-bdae-9ef2bde0f137"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.387839 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "771eab31-2aca-4f27-aba3-5dfb9ab8c25c" (UID: "771eab31-2aca-4f27-aba3-5dfb9ab8c25c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.388406 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a2e3a6d-e105-43a4-bdae-9ef2bde0f137-kube-api-access-xw84l" (OuterVolumeSpecName: "kube-api-access-xw84l") pod "3a2e3a6d-e105-43a4-bdae-9ef2bde0f137" (UID: "3a2e3a6d-e105-43a4-bdae-9ef2bde0f137"). InnerVolumeSpecName "kube-api-access-xw84l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.397557 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-config" (OuterVolumeSpecName: "config") pod "771eab31-2aca-4f27-aba3-5dfb9ab8c25c" (UID: "771eab31-2aca-4f27-aba3-5dfb9ab8c25c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.445311 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a2e3a6d-e105-43a4-bdae-9ef2bde0f137-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a2e3a6d-e105-43a4-bdae-9ef2bde0f137" (UID: "3a2e3a6d-e105-43a4-bdae-9ef2bde0f137"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:29 crc kubenswrapper[4811]: E0216 21:13:29.446642 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-ovsdbserver-sb podName:771eab31-2aca-4f27-aba3-5dfb9ab8c25c nodeName:}" failed. No retries permitted until 2026-02-16 21:13:29.946612138 +0000 UTC m=+1027.875908076 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ovsdbserver-sb" (UniqueName: "kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-ovsdbserver-sb") pod "771eab31-2aca-4f27-aba3-5dfb9ab8c25c" (UID: "771eab31-2aca-4f27-aba3-5dfb9ab8c25c") : error deleting /var/lib/kubelet/pods/771eab31-2aca-4f27-aba3-5dfb9ab8c25c/volume-subpaths: remove /var/lib/kubelet/pods/771eab31-2aca-4f27-aba3-5dfb9ab8c25c/volume-subpaths: no such file or directory Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.446832 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "771eab31-2aca-4f27-aba3-5dfb9ab8c25c" (UID: "771eab31-2aca-4f27-aba3-5dfb9ab8c25c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.465692 4811 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a2e3a6d-e105-43a4-bdae-9ef2bde0f137-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.465724 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.465735 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a2e3a6d-e105-43a4-bdae-9ef2bde0f137-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.465748 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw84l\" (UniqueName: \"kubernetes.io/projected/3a2e3a6d-e105-43a4-bdae-9ef2bde0f137-kube-api-access-xw84l\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.465757 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.465766 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.499009 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a2e3a6d-e105-43a4-bdae-9ef2bde0f137-config-data" (OuterVolumeSpecName: "config-data") pod "3a2e3a6d-e105-43a4-bdae-9ef2bde0f137" (UID: "3a2e3a6d-e105-43a4-bdae-9ef2bde0f137"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.567593 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a2e3a6d-e105-43a4-bdae-9ef2bde0f137-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.957901 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gbrql" event={"ID":"3237b6a2-9b91-41f2-bcea-21b9f5e91f80","Type":"ContainerStarted","Data":"d2f95e4d2897473b77afcffbc43b8d5891a29386bd997ab8c4ab099c55f8191b"} Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.969648 4811 generic.go:334] "Generic (PLEG): container finished" podID="e927c15d-6ca1-4473-a79a-52d223380f18" containerID="9b970b59e69c67d87fe1d5561fd1c5be47895dcfd140504bc8a6c0e727858bb8" exitCode=0 Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.969760 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" event={"ID":"e927c15d-6ca1-4473-a79a-52d223380f18","Type":"ContainerDied","Data":"9b970b59e69c67d87fe1d5561fd1c5be47895dcfd140504bc8a6c0e727858bb8"} Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.977406 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-ovsdbserver-sb\") pod \"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\" (UID: \"771eab31-2aca-4f27-aba3-5dfb9ab8c25c\") " Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.977881 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "771eab31-2aca-4f27-aba3-5dfb9ab8c25c" (UID: "771eab31-2aca-4f27-aba3-5dfb9ab8c25c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.978021 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-gbrql" podStartSLOduration=3.978004333 podStartE2EDuration="3.978004333s" podCreationTimestamp="2026-02-16 21:13:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:13:29.972769371 +0000 UTC m=+1027.902065309" watchObservedRunningTime="2026-02-16 21:13:29.978004333 +0000 UTC m=+1027.907300271" Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.981177 4811 generic.go:334] "Generic (PLEG): container finished" podID="e994011a-8ba4-4eed-9c4c-5ddac8b43325" containerID="8acc4470886750c5f5d4a45e4b3a6b962ab17b59aa9148c75689ec08e8d5711e" exitCode=0 Feb 16 21:13:29 crc kubenswrapper[4811]: I0216 21:13:29.981279 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e994011a-8ba4-4eed-9c4c-5ddac8b43325","Type":"ContainerDied","Data":"8acc4470886750c5f5d4a45e4b3a6b962ab17b59aa9148c75689ec08e8d5711e"} Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.026689 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-fx82t" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.029303 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-9grpj" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.030472 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3590443c-c5fd-4eec-a144-06cddd956651","Type":"ContainerStarted","Data":"4cd868db945f6fffb38b1449d4d7e760a5580017479f3b70d43443bdc8f1dd70"} Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.030511 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3590443c-c5fd-4eec-a144-06cddd956651","Type":"ContainerStarted","Data":"f76728399c058943ad900ef1a986a4ade0657c6b8823c50feac3eb2e412ac662"} Feb 16 21:13:30 crc kubenswrapper[4811]: E0216 21:13:30.036521 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.079595 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/771eab31-2aca-4f27-aba3-5dfb9ab8c25c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.141262 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-9grpj"] Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.167652 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-9grpj"] Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.503338 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-pxsxp"] Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.528506 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56798b757f-qgsz5"] 
Feb 16 21:13:30 crc kubenswrapper[4811]: E0216 21:13:30.540006 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a2e3a6d-e105-43a4-bdae-9ef2bde0f137" containerName="glance-db-sync" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.540040 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a2e3a6d-e105-43a4-bdae-9ef2bde0f137" containerName="glance-db-sync" Feb 16 21:13:30 crc kubenswrapper[4811]: E0216 21:13:30.540064 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="771eab31-2aca-4f27-aba3-5dfb9ab8c25c" containerName="init" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.540070 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="771eab31-2aca-4f27-aba3-5dfb9ab8c25c" containerName="init" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.540308 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a2e3a6d-e105-43a4-bdae-9ef2bde0f137" containerName="glance-db-sync" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.540321 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="771eab31-2aca-4f27-aba3-5dfb9ab8c25c" containerName="init" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.541295 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-qgsz5" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.551602 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-qgsz5"] Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.601592 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db5aee21-b2c8-4235-b6a8-9bc44960878e-dns-svc\") pod \"dnsmasq-dns-56798b757f-qgsz5\" (UID: \"db5aee21-b2c8-4235-b6a8-9bc44960878e\") " pod="openstack/dnsmasq-dns-56798b757f-qgsz5" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.601652 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db5aee21-b2c8-4235-b6a8-9bc44960878e-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-qgsz5\" (UID: \"db5aee21-b2c8-4235-b6a8-9bc44960878e\") " pod="openstack/dnsmasq-dns-56798b757f-qgsz5" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.601677 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db5aee21-b2c8-4235-b6a8-9bc44960878e-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-qgsz5\" (UID: \"db5aee21-b2c8-4235-b6a8-9bc44960878e\") " pod="openstack/dnsmasq-dns-56798b757f-qgsz5" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.601749 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db5aee21-b2c8-4235-b6a8-9bc44960878e-config\") pod \"dnsmasq-dns-56798b757f-qgsz5\" (UID: \"db5aee21-b2c8-4235-b6a8-9bc44960878e\") " pod="openstack/dnsmasq-dns-56798b757f-qgsz5" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.601790 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wlc6l\" (UniqueName: \"kubernetes.io/projected/db5aee21-b2c8-4235-b6a8-9bc44960878e-kube-api-access-wlc6l\") pod \"dnsmasq-dns-56798b757f-qgsz5\" (UID: \"db5aee21-b2c8-4235-b6a8-9bc44960878e\") " pod="openstack/dnsmasq-dns-56798b757f-qgsz5" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.703987 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db5aee21-b2c8-4235-b6a8-9bc44960878e-dns-svc\") pod \"dnsmasq-dns-56798b757f-qgsz5\" (UID: \"db5aee21-b2c8-4235-b6a8-9bc44960878e\") " pod="openstack/dnsmasq-dns-56798b757f-qgsz5" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.704041 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db5aee21-b2c8-4235-b6a8-9bc44960878e-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-qgsz5\" (UID: \"db5aee21-b2c8-4235-b6a8-9bc44960878e\") " pod="openstack/dnsmasq-dns-56798b757f-qgsz5" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.704061 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db5aee21-b2c8-4235-b6a8-9bc44960878e-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-qgsz5\" (UID: \"db5aee21-b2c8-4235-b6a8-9bc44960878e\") " pod="openstack/dnsmasq-dns-56798b757f-qgsz5" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.704137 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db5aee21-b2c8-4235-b6a8-9bc44960878e-config\") pod \"dnsmasq-dns-56798b757f-qgsz5\" (UID: \"db5aee21-b2c8-4235-b6a8-9bc44960878e\") " pod="openstack/dnsmasq-dns-56798b757f-qgsz5" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.704175 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlc6l\" (UniqueName: 
\"kubernetes.io/projected/db5aee21-b2c8-4235-b6a8-9bc44960878e-kube-api-access-wlc6l\") pod \"dnsmasq-dns-56798b757f-qgsz5\" (UID: \"db5aee21-b2c8-4235-b6a8-9bc44960878e\") " pod="openstack/dnsmasq-dns-56798b757f-qgsz5" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.704828 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db5aee21-b2c8-4235-b6a8-9bc44960878e-dns-svc\") pod \"dnsmasq-dns-56798b757f-qgsz5\" (UID: \"db5aee21-b2c8-4235-b6a8-9bc44960878e\") " pod="openstack/dnsmasq-dns-56798b757f-qgsz5" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.705453 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db5aee21-b2c8-4235-b6a8-9bc44960878e-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-qgsz5\" (UID: \"db5aee21-b2c8-4235-b6a8-9bc44960878e\") " pod="openstack/dnsmasq-dns-56798b757f-qgsz5" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.705791 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db5aee21-b2c8-4235-b6a8-9bc44960878e-config\") pod \"dnsmasq-dns-56798b757f-qgsz5\" (UID: \"db5aee21-b2c8-4235-b6a8-9bc44960878e\") " pod="openstack/dnsmasq-dns-56798b757f-qgsz5" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.706183 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db5aee21-b2c8-4235-b6a8-9bc44960878e-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-qgsz5\" (UID: \"db5aee21-b2c8-4235-b6a8-9bc44960878e\") " pod="openstack/dnsmasq-dns-56798b757f-qgsz5" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.725257 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlc6l\" (UniqueName: \"kubernetes.io/projected/db5aee21-b2c8-4235-b6a8-9bc44960878e-kube-api-access-wlc6l\") pod 
\"dnsmasq-dns-56798b757f-qgsz5\" (UID: \"db5aee21-b2c8-4235-b6a8-9bc44960878e\") " pod="openstack/dnsmasq-dns-56798b757f-qgsz5" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.734588 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="771eab31-2aca-4f27-aba3-5dfb9ab8c25c" path="/var/lib/kubelet/pods/771eab31-2aca-4f27-aba3-5dfb9ab8c25c/volumes" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.827185 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 21:13:30 crc kubenswrapper[4811]: I0216 21:13:30.932633 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-qgsz5" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.058871 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" event={"ID":"e927c15d-6ca1-4473-a79a-52d223380f18","Type":"ContainerStarted","Data":"9d871c600b7e7aa7bef65210cf545f34da6fecca80bb24fa4524aaf757c4d0cd"} Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.060371 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.070373 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e994011a-8ba4-4eed-9c4c-5ddac8b43325","Type":"ContainerStarted","Data":"efee32d878ac95098af5fc8968e22a9d74701a5c690aebb87f0807d92919bb5e"} Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.086734 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" podStartSLOduration=5.086717237 podStartE2EDuration="5.086717237s" podCreationTimestamp="2026-02-16 21:13:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 
21:13:31.080714316 +0000 UTC m=+1029.010010264" watchObservedRunningTime="2026-02-16 21:13:31.086717237 +0000 UTC m=+1029.016013175" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.105501 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3590443c-c5fd-4eec-a144-06cddd956651","Type":"ContainerStarted","Data":"05dd814050b6fd6eb9e5928acc2225db7f9d13c69d7b3971ef5e2182a5f01138"} Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.105533 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3590443c-c5fd-4eec-a144-06cddd956651","Type":"ContainerStarted","Data":"6de8a526ae70dfc42f531d23b3891efe39c45ca650c316a19856ee7155b33af8"} Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.105541 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3590443c-c5fd-4eec-a144-06cddd956651","Type":"ContainerStarted","Data":"0a7241a72cd21b18ec5a913f9bf12ddf9a85a159ace765e2b395d13d8703c194"} Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.372816 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.374695 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.378413 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.378534 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.397067 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-lkdd5" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.409608 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.541179 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae37545-788b-495d-b91b-01e7fa6cd250-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.541439 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae37545-788b-495d-b91b-01e7fa6cd250-config-data\") pod \"glance-default-external-api-0\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.541481 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-953416f2-8442-4b16-a122-58a357229e61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-953416f2-8442-4b16-a122-58a357229e61\") pod \"glance-default-external-api-0\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " 
pod="openstack/glance-default-external-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.541503 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tfns\" (UniqueName: \"kubernetes.io/projected/dae37545-788b-495d-b91b-01e7fa6cd250-kube-api-access-8tfns\") pod \"glance-default-external-api-0\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.541550 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dae37545-788b-495d-b91b-01e7fa6cd250-scripts\") pod \"glance-default-external-api-0\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.541599 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dae37545-788b-495d-b91b-01e7fa6cd250-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.541617 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dae37545-788b-495d-b91b-01e7fa6cd250-logs\") pod \"glance-default-external-api-0\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.570364 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-qgsz5"] Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.644313 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dae37545-788b-495d-b91b-01e7fa6cd250-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.644552 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae37545-788b-495d-b91b-01e7fa6cd250-config-data\") pod \"glance-default-external-api-0\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.644671 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-953416f2-8442-4b16-a122-58a357229e61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-953416f2-8442-4b16-a122-58a357229e61\") pod \"glance-default-external-api-0\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.644750 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tfns\" (UniqueName: \"kubernetes.io/projected/dae37545-788b-495d-b91b-01e7fa6cd250-kube-api-access-8tfns\") pod \"glance-default-external-api-0\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.644817 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dae37545-788b-495d-b91b-01e7fa6cd250-scripts\") pod \"glance-default-external-api-0\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.644897 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/dae37545-788b-495d-b91b-01e7fa6cd250-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.644961 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dae37545-788b-495d-b91b-01e7fa6cd250-logs\") pod \"glance-default-external-api-0\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.645464 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dae37545-788b-495d-b91b-01e7fa6cd250-logs\") pod \"glance-default-external-api-0\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.649797 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dae37545-788b-495d-b91b-01e7fa6cd250-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.653607 4811 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.653650 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-953416f2-8442-4b16-a122-58a357229e61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-953416f2-8442-4b16-a122-58a357229e61\") pod \"glance-default-external-api-0\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4be36762ec81406c6a6e28b128f06340bf885474247138da4e2187429bf9f1df/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.654495 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dae37545-788b-495d-b91b-01e7fa6cd250-scripts\") pod \"glance-default-external-api-0\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.654603 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae37545-788b-495d-b91b-01e7fa6cd250-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.662043 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae37545-788b-495d-b91b-01e7fa6cd250-config-data\") pod \"glance-default-external-api-0\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.663170 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tfns\" (UniqueName: \"kubernetes.io/projected/dae37545-788b-495d-b91b-01e7fa6cd250-kube-api-access-8tfns\") pod 
\"glance-default-external-api-0\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.714227 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-953416f2-8442-4b16-a122-58a357229e61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-953416f2-8442-4b16-a122-58a357229e61\") pod \"glance-default-external-api-0\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.744333 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.745991 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.754109 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.763758 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.855190 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.855468 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.855552 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\") pod \"glance-default-internal-api-0\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.855622 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.855743 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.855875 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf9mr\" (UniqueName: \"kubernetes.io/projected/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-kube-api-access-mf9mr\") pod \"glance-default-internal-api-0\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.855947 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.958381 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.958533 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-logs\") pod \"glance-default-internal-api-0\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.958573 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\") pod \"glance-default-internal-api-0\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.958607 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.959377 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.959709 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-logs\") pod \"glance-default-internal-api-0\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.959750 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.960473 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf9mr\" (UniqueName: \"kubernetes.io/projected/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-kube-api-access-mf9mr\") pod \"glance-default-internal-api-0\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.960537 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.961297 4811 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.961339 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\") pod \"glance-default-internal-api-0\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/869535b9c27ca8a569925eb99ba7bc75347069a54c745c93c6e314aa9f1a2c6c/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.963382 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.964595 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.963343 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:31 crc kubenswrapper[4811]: I0216 21:13:31.982511 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf9mr\" (UniqueName: \"kubernetes.io/projected/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-kube-api-access-mf9mr\") pod 
\"glance-default-internal-api-0\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.011436 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\") pod \"glance-default-internal-api-0\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.013661 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.074798 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.126521 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3590443c-c5fd-4eec-a144-06cddd956651","Type":"ContainerStarted","Data":"ac9c9065aa36f4683214e287071449122588864b546615c5a2432d63ed2c1790"} Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.126567 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3590443c-c5fd-4eec-a144-06cddd956651","Type":"ContainerStarted","Data":"43882e6d7145554687711870ef0cb5e598fd90e356eab79cff2a6bf8b6e18071"} Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.133665 4811 generic.go:334] "Generic (PLEG): container finished" podID="db5aee21-b2c8-4235-b6a8-9bc44960878e" containerID="1a26f746efb586c1cd39a15932ce1c1db7da1f3cceaffa000f3eccaaf7831d8d" exitCode=0 Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.133864 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" 
podUID="e927c15d-6ca1-4473-a79a-52d223380f18" containerName="dnsmasq-dns" containerID="cri-o://9d871c600b7e7aa7bef65210cf545f34da6fecca80bb24fa4524aaf757c4d0cd" gracePeriod=10 Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.133872 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-qgsz5" event={"ID":"db5aee21-b2c8-4235-b6a8-9bc44960878e","Type":"ContainerDied","Data":"1a26f746efb586c1cd39a15932ce1c1db7da1f3cceaffa000f3eccaaf7831d8d"} Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.133908 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-qgsz5" event={"ID":"db5aee21-b2c8-4235-b6a8-9bc44960878e","Type":"ContainerStarted","Data":"57f29436c982fc4c85d6190f76dece458d2cc8df96c774277b0c61919506f863"} Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.172402 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=38.305043908 podStartE2EDuration="45.1723856s" podCreationTimestamp="2026-02-16 21:12:47 +0000 UTC" firstStartedPulling="2026-02-16 21:13:21.818625586 +0000 UTC m=+1019.747921534" lastFinishedPulling="2026-02-16 21:13:28.685967298 +0000 UTC m=+1026.615263226" observedRunningTime="2026-02-16 21:13:32.166668067 +0000 UTC m=+1030.095964005" watchObservedRunningTime="2026-02-16 21:13:32.1723856 +0000 UTC m=+1030.101681538" Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.455278 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-qgsz5"] Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.486464 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-ns5q9"] Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.488020 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.494674 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.517166 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-ns5q9"] Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.580172 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-ns5q9\" (UID: \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.580241 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-config\") pod \"dnsmasq-dns-56df8fb6b7-ns5q9\" (UID: \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.580261 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-ns5q9\" (UID: \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.580298 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-ns5q9\" (UID: \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\") " 
pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.580609 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-ns5q9\" (UID: \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.580841 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nnfs\" (UniqueName: \"kubernetes.io/projected/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-kube-api-access-2nnfs\") pod \"dnsmasq-dns-56df8fb6b7-ns5q9\" (UID: \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.683130 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nnfs\" (UniqueName: \"kubernetes.io/projected/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-kube-api-access-2nnfs\") pod \"dnsmasq-dns-56df8fb6b7-ns5q9\" (UID: \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.683321 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-ns5q9\" (UID: \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.683350 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-config\") pod \"dnsmasq-dns-56df8fb6b7-ns5q9\" (UID: \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\") " 
pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.683366 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-ns5q9\" (UID: \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.683403 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-ns5q9\" (UID: \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.683474 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-ns5q9\" (UID: \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.684243 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-ns5q9\" (UID: \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.685041 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-ns5q9\" (UID: \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" Feb 16 21:13:32 crc 
kubenswrapper[4811]: I0216 21:13:32.687967 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-ns5q9\" (UID: \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.688001 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-config\") pod \"dnsmasq-dns-56df8fb6b7-ns5q9\" (UID: \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.688060 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-ns5q9\" (UID: \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.700991 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nnfs\" (UniqueName: \"kubernetes.io/projected/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-kube-api-access-2nnfs\") pod \"dnsmasq-dns-56df8fb6b7-ns5q9\" (UID: \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" Feb 16 21:13:32 crc kubenswrapper[4811]: I0216 21:13:32.819160 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" Feb 16 21:13:33 crc kubenswrapper[4811]: I0216 21:13:33.147378 4811 generic.go:334] "Generic (PLEG): container finished" podID="e68df5a8-d13c-4c3c-ac36-b791cb990881" containerID="8dd8c402b8048ef6a4f3f27495097c1a76f9e7ad1777f0d4c60d692eae2434fd" exitCode=0 Feb 16 21:13:33 crc kubenswrapper[4811]: I0216 21:13:33.147701 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4r8qf" event={"ID":"e68df5a8-d13c-4c3c-ac36-b791cb990881","Type":"ContainerDied","Data":"8dd8c402b8048ef6a4f3f27495097c1a76f9e7ad1777f0d4c60d692eae2434fd"} Feb 16 21:13:33 crc kubenswrapper[4811]: I0216 21:13:33.153897 4811 generic.go:334] "Generic (PLEG): container finished" podID="e927c15d-6ca1-4473-a79a-52d223380f18" containerID="9d871c600b7e7aa7bef65210cf545f34da6fecca80bb24fa4524aaf757c4d0cd" exitCode=0 Feb 16 21:13:33 crc kubenswrapper[4811]: I0216 21:13:33.155509 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" event={"ID":"e927c15d-6ca1-4473-a79a-52d223380f18","Type":"ContainerDied","Data":"9d871c600b7e7aa7bef65210cf545f34da6fecca80bb24fa4524aaf757c4d0cd"} Feb 16 21:13:34 crc kubenswrapper[4811]: I0216 21:13:34.168865 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e994011a-8ba4-4eed-9c4c-5ddac8b43325","Type":"ContainerStarted","Data":"e2cb3ded3c01f5ed585e9307f158b204629f01e5733549f133ef4ff1ec7884e4"} Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.017440 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4r8qf" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.142494 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-config-data\") pod \"e68df5a8-d13c-4c3c-ac36-b791cb990881\" (UID: \"e68df5a8-d13c-4c3c-ac36-b791cb990881\") " Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.143461 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-scripts\") pod \"e68df5a8-d13c-4c3c-ac36-b791cb990881\" (UID: \"e68df5a8-d13c-4c3c-ac36-b791cb990881\") " Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.143622 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-credential-keys\") pod \"e68df5a8-d13c-4c3c-ac36-b791cb990881\" (UID: \"e68df5a8-d13c-4c3c-ac36-b791cb990881\") " Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.143659 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq9z7\" (UniqueName: \"kubernetes.io/projected/e68df5a8-d13c-4c3c-ac36-b791cb990881-kube-api-access-nq9z7\") pod \"e68df5a8-d13c-4c3c-ac36-b791cb990881\" (UID: \"e68df5a8-d13c-4c3c-ac36-b791cb990881\") " Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.143940 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-combined-ca-bundle\") pod \"e68df5a8-d13c-4c3c-ac36-b791cb990881\" (UID: \"e68df5a8-d13c-4c3c-ac36-b791cb990881\") " Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.143992 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-fernet-keys\") pod \"e68df5a8-d13c-4c3c-ac36-b791cb990881\" (UID: \"e68df5a8-d13c-4c3c-ac36-b791cb990881\") " Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.158521 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e68df5a8-d13c-4c3c-ac36-b791cb990881-kube-api-access-nq9z7" (OuterVolumeSpecName: "kube-api-access-nq9z7") pod "e68df5a8-d13c-4c3c-ac36-b791cb990881" (UID: "e68df5a8-d13c-4c3c-ac36-b791cb990881"). InnerVolumeSpecName "kube-api-access-nq9z7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.159244 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-scripts" (OuterVolumeSpecName: "scripts") pod "e68df5a8-d13c-4c3c-ac36-b791cb990881" (UID: "e68df5a8-d13c-4c3c-ac36-b791cb990881"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.160702 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e68df5a8-d13c-4c3c-ac36-b791cb990881" (UID: "e68df5a8-d13c-4c3c-ac36-b791cb990881"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.195882 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e68df5a8-d13c-4c3c-ac36-b791cb990881" (UID: "e68df5a8-d13c-4c3c-ac36-b791cb990881"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.199100 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-config-data" (OuterVolumeSpecName: "config-data") pod "e68df5a8-d13c-4c3c-ac36-b791cb990881" (UID: "e68df5a8-d13c-4c3c-ac36-b791cb990881"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.201576 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4r8qf" event={"ID":"e68df5a8-d13c-4c3c-ac36-b791cb990881","Type":"ContainerDied","Data":"8ad99e8ebeead3850a85e83ac2495ae3c7745b7081585e19b8483de6410280cf"} Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.201813 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ad99e8ebeead3850a85e83ac2495ae3c7745b7081585e19b8483de6410280cf" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.202027 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4r8qf" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.219524 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e68df5a8-d13c-4c3c-ac36-b791cb990881" (UID: "e68df5a8-d13c-4c3c-ac36-b791cb990881"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.246391 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.246671 4811 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.246756 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.246834 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.246912 4811 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e68df5a8-d13c-4c3c-ac36-b791cb990881-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.247000 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq9z7\" (UniqueName: \"kubernetes.io/projected/e68df5a8-d13c-4c3c-ac36-b791cb990881-kube-api-access-nq9z7\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.303160 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-4r8qf"] Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.311987 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-4r8qf"] Feb 16 21:13:35 crc 
kubenswrapper[4811]: I0216 21:13:35.432310 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-l6k27"] Feb 16 21:13:35 crc kubenswrapper[4811]: E0216 21:13:35.432816 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68df5a8-d13c-4c3c-ac36-b791cb990881" containerName="keystone-bootstrap" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.432835 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68df5a8-d13c-4c3c-ac36-b791cb990881" containerName="keystone-bootstrap" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.433096 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e68df5a8-d13c-4c3c-ac36-b791cb990881" containerName="keystone-bootstrap" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.434211 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l6k27" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.450326 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l6k27"] Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.554145 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-fernet-keys\") pod \"keystone-bootstrap-l6k27\" (UID: \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\") " pod="openstack/keystone-bootstrap-l6k27" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.554540 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-credential-keys\") pod \"keystone-bootstrap-l6k27\" (UID: \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\") " pod="openstack/keystone-bootstrap-l6k27" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.554629 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-scripts\") pod \"keystone-bootstrap-l6k27\" (UID: \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\") " pod="openstack/keystone-bootstrap-l6k27" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.554687 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-combined-ca-bundle\") pod \"keystone-bootstrap-l6k27\" (UID: \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\") " pod="openstack/keystone-bootstrap-l6k27" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.554706 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-config-data\") pod \"keystone-bootstrap-l6k27\" (UID: \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\") " pod="openstack/keystone-bootstrap-l6k27" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.554731 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxjfq\" (UniqueName: \"kubernetes.io/projected/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-kube-api-access-cxjfq\") pod \"keystone-bootstrap-l6k27\" (UID: \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\") " pod="openstack/keystone-bootstrap-l6k27" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.596885 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.656881 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-fernet-keys\") pod \"keystone-bootstrap-l6k27\" (UID: \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\") " 
pod="openstack/keystone-bootstrap-l6k27" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.656957 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-credential-keys\") pod \"keystone-bootstrap-l6k27\" (UID: \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\") " pod="openstack/keystone-bootstrap-l6k27" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.657018 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-scripts\") pod \"keystone-bootstrap-l6k27\" (UID: \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\") " pod="openstack/keystone-bootstrap-l6k27" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.657056 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-combined-ca-bundle\") pod \"keystone-bootstrap-l6k27\" (UID: \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\") " pod="openstack/keystone-bootstrap-l6k27" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.657074 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-config-data\") pod \"keystone-bootstrap-l6k27\" (UID: \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\") " pod="openstack/keystone-bootstrap-l6k27" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.657096 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxjfq\" (UniqueName: \"kubernetes.io/projected/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-kube-api-access-cxjfq\") pod \"keystone-bootstrap-l6k27\" (UID: \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\") " pod="openstack/keystone-bootstrap-l6k27" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 
21:13:35.663946 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-scripts\") pod \"keystone-bootstrap-l6k27\" (UID: \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\") " pod="openstack/keystone-bootstrap-l6k27" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.665014 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-fernet-keys\") pod \"keystone-bootstrap-l6k27\" (UID: \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\") " pod="openstack/keystone-bootstrap-l6k27" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.665743 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-credential-keys\") pod \"keystone-bootstrap-l6k27\" (UID: \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\") " pod="openstack/keystone-bootstrap-l6k27" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.666104 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-config-data\") pod \"keystone-bootstrap-l6k27\" (UID: \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\") " pod="openstack/keystone-bootstrap-l6k27" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.674783 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxjfq\" (UniqueName: \"kubernetes.io/projected/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-kube-api-access-cxjfq\") pod \"keystone-bootstrap-l6k27\" (UID: \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\") " pod="openstack/keystone-bootstrap-l6k27" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.682601 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-combined-ca-bundle\") pod \"keystone-bootstrap-l6k27\" (UID: \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\") " pod="openstack/keystone-bootstrap-l6k27" Feb 16 21:13:35 crc kubenswrapper[4811]: I0216 21:13:35.768590 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l6k27" Feb 16 21:13:36 crc kubenswrapper[4811]: I0216 21:13:36.172449 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:13:36 crc kubenswrapper[4811]: I0216 21:13:36.265350 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:13:36 crc kubenswrapper[4811]: I0216 21:13:36.715790 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e68df5a8-d13c-4c3c-ac36-b791cb990881" path="/var/lib/kubelet/pods/e68df5a8-d13c-4c3c-ac36-b791cb990881/volumes" Feb 16 21:13:37 crc kubenswrapper[4811]: I0216 21:13:37.121890 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" podUID="e927c15d-6ca1-4473-a79a-52d223380f18" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.160:5353: connect: connection refused" Feb 16 21:13:41 crc kubenswrapper[4811]: I0216 21:13:41.273721 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dae37545-788b-495d-b91b-01e7fa6cd250","Type":"ContainerStarted","Data":"f3b9744013fb79ead5790418de73f586f05ca45e93acf92fd7bcf02a7cb5afb0"} Feb 16 21:13:41 crc kubenswrapper[4811]: I0216 21:13:41.301012 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:13:44 crc kubenswrapper[4811]: E0216 21:13:44.826998 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 21:13:44 crc kubenswrapper[4811]: E0216 21:13:44.827565 4811 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 21:13:44 crc kubenswrapper[4811]: E0216 21:13:44.827681 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s56zx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-x49kk_openstack(46d0afcb-2a14-4e67-89fc-ed848d1637ce): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:13:44 crc kubenswrapper[4811]: E0216 21:13:44.829044 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:13:47 crc kubenswrapper[4811]: I0216 21:13:47.120755 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" podUID="e927c15d-6ca1-4473-a79a-52d223380f18" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.160:5353: i/o timeout" Feb 16 21:13:48 crc kubenswrapper[4811]: E0216 21:13:48.528322 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 16 21:13:48 crc kubenswrapper[4811]: E0216 21:13:48.528747 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5d5t7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Cap
abilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-njvjn_openstack(89a1f359-cb47-470b-ad6e-48d11efacfce): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:13:48 crc kubenswrapper[4811]: E0216 21:13:48.530078 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-njvjn" podUID="89a1f359-cb47-470b-ad6e-48d11efacfce" Feb 16 21:13:48 crc kubenswrapper[4811]: W0216 21:13:48.567442 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1edfbd65_bed1_4d1b_86ae_d7fefa3e4432.slice/crio-b8ae4e6b85f4c5c671ad04275d3599380dffc390da4d58161c35f99a2a22be86 WatchSource:0}: Error finding container b8ae4e6b85f4c5c671ad04275d3599380dffc390da4d58161c35f99a2a22be86: Status 404 returned error can't find the container with id b8ae4e6b85f4c5c671ad04275d3599380dffc390da4d58161c35f99a2a22be86 Feb 16 21:13:48 crc kubenswrapper[4811]: I0216 21:13:48.750770 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" Feb 16 21:13:48 crc kubenswrapper[4811]: I0216 21:13:48.800569 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2dq9\" (UniqueName: \"kubernetes.io/projected/e927c15d-6ca1-4473-a79a-52d223380f18-kube-api-access-h2dq9\") pod \"e927c15d-6ca1-4473-a79a-52d223380f18\" (UID: \"e927c15d-6ca1-4473-a79a-52d223380f18\") " Feb 16 21:13:48 crc kubenswrapper[4811]: I0216 21:13:48.800658 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e927c15d-6ca1-4473-a79a-52d223380f18-config\") pod \"e927c15d-6ca1-4473-a79a-52d223380f18\" (UID: \"e927c15d-6ca1-4473-a79a-52d223380f18\") " Feb 16 21:13:48 crc kubenswrapper[4811]: I0216 21:13:48.800689 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e927c15d-6ca1-4473-a79a-52d223380f18-dns-svc\") pod \"e927c15d-6ca1-4473-a79a-52d223380f18\" (UID: \"e927c15d-6ca1-4473-a79a-52d223380f18\") " Feb 16 21:13:48 crc kubenswrapper[4811]: I0216 21:13:48.800753 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e927c15d-6ca1-4473-a79a-52d223380f18-ovsdbserver-sb\") pod \"e927c15d-6ca1-4473-a79a-52d223380f18\" (UID: \"e927c15d-6ca1-4473-a79a-52d223380f18\") " Feb 16 21:13:48 crc kubenswrapper[4811]: I0216 21:13:48.800811 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e927c15d-6ca1-4473-a79a-52d223380f18-ovsdbserver-nb\") pod \"e927c15d-6ca1-4473-a79a-52d223380f18\" (UID: \"e927c15d-6ca1-4473-a79a-52d223380f18\") " Feb 16 21:13:48 crc kubenswrapper[4811]: I0216 21:13:48.812701 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e927c15d-6ca1-4473-a79a-52d223380f18-kube-api-access-h2dq9" (OuterVolumeSpecName: "kube-api-access-h2dq9") pod "e927c15d-6ca1-4473-a79a-52d223380f18" (UID: "e927c15d-6ca1-4473-a79a-52d223380f18"). InnerVolumeSpecName "kube-api-access-h2dq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:13:48 crc kubenswrapper[4811]: I0216 21:13:48.894704 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e927c15d-6ca1-4473-a79a-52d223380f18-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e927c15d-6ca1-4473-a79a-52d223380f18" (UID: "e927c15d-6ca1-4473-a79a-52d223380f18"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:48 crc kubenswrapper[4811]: I0216 21:13:48.894970 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e927c15d-6ca1-4473-a79a-52d223380f18-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e927c15d-6ca1-4473-a79a-52d223380f18" (UID: "e927c15d-6ca1-4473-a79a-52d223380f18"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:48 crc kubenswrapper[4811]: I0216 21:13:48.895148 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e927c15d-6ca1-4473-a79a-52d223380f18-config" (OuterVolumeSpecName: "config") pod "e927c15d-6ca1-4473-a79a-52d223380f18" (UID: "e927c15d-6ca1-4473-a79a-52d223380f18"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:48 crc kubenswrapper[4811]: I0216 21:13:48.897678 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e927c15d-6ca1-4473-a79a-52d223380f18-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e927c15d-6ca1-4473-a79a-52d223380f18" (UID: "e927c15d-6ca1-4473-a79a-52d223380f18"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:48 crc kubenswrapper[4811]: I0216 21:13:48.902614 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e927c15d-6ca1-4473-a79a-52d223380f18-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:48 crc kubenswrapper[4811]: I0216 21:13:48.902649 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e927c15d-6ca1-4473-a79a-52d223380f18-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:48 crc kubenswrapper[4811]: I0216 21:13:48.902663 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e927c15d-6ca1-4473-a79a-52d223380f18-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:48 crc kubenswrapper[4811]: I0216 21:13:48.902673 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e927c15d-6ca1-4473-a79a-52d223380f18-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:48 crc kubenswrapper[4811]: I0216 21:13:48.902682 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2dq9\" (UniqueName: \"kubernetes.io/projected/e927c15d-6ca1-4473-a79a-52d223380f18-kube-api-access-h2dq9\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:49 crc kubenswrapper[4811]: I0216 21:13:49.027922 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-ns5q9"] Feb 16 21:13:49 crc kubenswrapper[4811]: I0216 21:13:49.388972 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" event={"ID":"e927c15d-6ca1-4473-a79a-52d223380f18","Type":"ContainerDied","Data":"c4aa328f1555b42c4d161845752a672dbf1060d33d66592c54a6305827c2fa44"} Feb 16 21:13:49 crc kubenswrapper[4811]: I0216 21:13:49.389043 4811 scope.go:117] "RemoveContainer" 
containerID="9d871c600b7e7aa7bef65210cf545f34da6fecca80bb24fa4524aaf757c4d0cd" Feb 16 21:13:49 crc kubenswrapper[4811]: I0216 21:13:49.389067 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" Feb 16 21:13:49 crc kubenswrapper[4811]: I0216 21:13:49.393775 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e994011a-8ba4-4eed-9c4c-5ddac8b43325","Type":"ContainerStarted","Data":"2bd9695fa6a0409fff4259f59b65f3c8afb953f9eed68b4d6ac85cefec8664aa"} Feb 16 21:13:49 crc kubenswrapper[4811]: I0216 21:13:49.397240 4811 generic.go:334] "Generic (PLEG): container finished" podID="3237b6a2-9b91-41f2-bcea-21b9f5e91f80" containerID="d2f95e4d2897473b77afcffbc43b8d5891a29386bd997ab8c4ab099c55f8191b" exitCode=0 Feb 16 21:13:49 crc kubenswrapper[4811]: I0216 21:13:49.397327 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gbrql" event={"ID":"3237b6a2-9b91-41f2-bcea-21b9f5e91f80","Type":"ContainerDied","Data":"d2f95e4d2897473b77afcffbc43b8d5891a29386bd997ab8c4ab099c55f8191b"} Feb 16 21:13:49 crc kubenswrapper[4811]: I0216 21:13:49.399596 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432","Type":"ContainerStarted","Data":"b8ae4e6b85f4c5c671ad04275d3599380dffc390da4d58161c35f99a2a22be86"} Feb 16 21:13:49 crc kubenswrapper[4811]: E0216 21:13:49.400979 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-njvjn" podUID="89a1f359-cb47-470b-ad6e-48d11efacfce" Feb 16 21:13:49 crc kubenswrapper[4811]: I0216 21:13:49.440993 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/prometheus-metric-storage-0" podStartSLOduration=30.440965132 podStartE2EDuration="30.440965132s" podCreationTimestamp="2026-02-16 21:13:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:13:49.421563896 +0000 UTC m=+1047.350859854" watchObservedRunningTime="2026-02-16 21:13:49.440965132 +0000 UTC m=+1047.370261090" Feb 16 21:13:49 crc kubenswrapper[4811]: I0216 21:13:49.455942 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-pxsxp"] Feb 16 21:13:49 crc kubenswrapper[4811]: I0216 21:13:49.465290 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-pxsxp"] Feb 16 21:13:50 crc kubenswrapper[4811]: W0216 21:13:50.036657 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0927a3bc_88fd_46ce_b4e7_43ed2a083e2c.slice/crio-0a912e3d3c58243e733cf6f6f76d6ffc2acf82a345e5682ec6b8d94171c95009 WatchSource:0}: Error finding container 0a912e3d3c58243e733cf6f6f76d6ffc2acf82a345e5682ec6b8d94171c95009: Status 404 returned error can't find the container with id 0a912e3d3c58243e733cf6f6f76d6ffc2acf82a345e5682ec6b8d94171c95009 Feb 16 21:13:50 crc kubenswrapper[4811]: I0216 21:13:50.072150 4811 scope.go:117] "RemoveContainer" containerID="9b970b59e69c67d87fe1d5561fd1c5be47895dcfd140504bc8a6c0e727858bb8" Feb 16 21:13:50 crc kubenswrapper[4811]: E0216 21:13:50.080324 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 16 21:13:50 crc kubenswrapper[4811]: E0216 21:13:50.080464 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-882fb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsU
ser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-qv84d_openstack(6a07ef56-cd30-4652-9fdd-65279e9b5fb5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:13:50 crc kubenswrapper[4811]: E0216 21:13:50.081648 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-qv84d" podUID="6a07ef56-cd30-4652-9fdd-65279e9b5fb5" Feb 16 21:13:50 crc kubenswrapper[4811]: I0216 21:13:50.199115 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:50 crc kubenswrapper[4811]: I0216 21:13:50.199772 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:50 crc kubenswrapper[4811]: I0216 21:13:50.203900 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:50 crc kubenswrapper[4811]: I0216 21:13:50.412870 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" event={"ID":"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c","Type":"ContainerStarted","Data":"0a912e3d3c58243e733cf6f6f76d6ffc2acf82a345e5682ec6b8d94171c95009"} Feb 16 21:13:50 crc kubenswrapper[4811]: I0216 21:13:50.419280 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56798b757f-qgsz5" podUID="db5aee21-b2c8-4235-b6a8-9bc44960878e" 
containerName="dnsmasq-dns" containerID="cri-o://7edaad3d1fd90d603e2204d1d43193b3e62afcddd7262acda230fa75b7253ada" gracePeriod=10 Feb 16 21:13:50 crc kubenswrapper[4811]: I0216 21:13:50.419340 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-qgsz5" event={"ID":"db5aee21-b2c8-4235-b6a8-9bc44960878e","Type":"ContainerStarted","Data":"7edaad3d1fd90d603e2204d1d43193b3e62afcddd7262acda230fa75b7253ada"} Feb 16 21:13:50 crc kubenswrapper[4811]: I0216 21:13:50.420479 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56798b757f-qgsz5" Feb 16 21:13:50 crc kubenswrapper[4811]: E0216 21:13:50.423541 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-qv84d" podUID="6a07ef56-cd30-4652-9fdd-65279e9b5fb5" Feb 16 21:13:50 crc kubenswrapper[4811]: I0216 21:13:50.425684 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 16 21:13:50 crc kubenswrapper[4811]: I0216 21:13:50.470714 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56798b757f-qgsz5" podStartSLOduration=20.470697357 podStartE2EDuration="20.470697357s" podCreationTimestamp="2026-02-16 21:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:13:50.449524776 +0000 UTC m=+1048.378820714" watchObservedRunningTime="2026-02-16 21:13:50.470697357 +0000 UTC m=+1048.399993295" Feb 16 21:13:50 crc kubenswrapper[4811]: I0216 21:13:50.551665 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l6k27"] Feb 16 21:13:50 crc kubenswrapper[4811]: I0216 21:13:50.720105 4811 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e927c15d-6ca1-4473-a79a-52d223380f18" path="/var/lib/kubelet/pods/e927c15d-6ca1-4473-a79a-52d223380f18/volumes" Feb 16 21:13:50 crc kubenswrapper[4811]: I0216 21:13:50.880147 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gbrql" Feb 16 21:13:50 crc kubenswrapper[4811]: I0216 21:13:50.897620 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3237b6a2-9b91-41f2-bcea-21b9f5e91f80-config\") pod \"3237b6a2-9b91-41f2-bcea-21b9f5e91f80\" (UID: \"3237b6a2-9b91-41f2-bcea-21b9f5e91f80\") " Feb 16 21:13:50 crc kubenswrapper[4811]: I0216 21:13:50.897804 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcb2p\" (UniqueName: \"kubernetes.io/projected/3237b6a2-9b91-41f2-bcea-21b9f5e91f80-kube-api-access-gcb2p\") pod \"3237b6a2-9b91-41f2-bcea-21b9f5e91f80\" (UID: \"3237b6a2-9b91-41f2-bcea-21b9f5e91f80\") " Feb 16 21:13:50 crc kubenswrapper[4811]: I0216 21:13:50.897929 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3237b6a2-9b91-41f2-bcea-21b9f5e91f80-combined-ca-bundle\") pod \"3237b6a2-9b91-41f2-bcea-21b9f5e91f80\" (UID: \"3237b6a2-9b91-41f2-bcea-21b9f5e91f80\") " Feb 16 21:13:50 crc kubenswrapper[4811]: I0216 21:13:50.908677 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3237b6a2-9b91-41f2-bcea-21b9f5e91f80-kube-api-access-gcb2p" (OuterVolumeSpecName: "kube-api-access-gcb2p") pod "3237b6a2-9b91-41f2-bcea-21b9f5e91f80" (UID: "3237b6a2-9b91-41f2-bcea-21b9f5e91f80"). InnerVolumeSpecName "kube-api-access-gcb2p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:13:50 crc kubenswrapper[4811]: I0216 21:13:50.945658 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3237b6a2-9b91-41f2-bcea-21b9f5e91f80-config" (OuterVolumeSpecName: "config") pod "3237b6a2-9b91-41f2-bcea-21b9f5e91f80" (UID: "3237b6a2-9b91-41f2-bcea-21b9f5e91f80"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:50 crc kubenswrapper[4811]: I0216 21:13:50.948592 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3237b6a2-9b91-41f2-bcea-21b9f5e91f80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3237b6a2-9b91-41f2-bcea-21b9f5e91f80" (UID: "3237b6a2-9b91-41f2-bcea-21b9f5e91f80"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:50 crc kubenswrapper[4811]: I0216 21:13:50.984462 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-qgsz5" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.003372 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db5aee21-b2c8-4235-b6a8-9bc44960878e-ovsdbserver-sb\") pod \"db5aee21-b2c8-4235-b6a8-9bc44960878e\" (UID: \"db5aee21-b2c8-4235-b6a8-9bc44960878e\") " Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.003411 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db5aee21-b2c8-4235-b6a8-9bc44960878e-ovsdbserver-nb\") pod \"db5aee21-b2c8-4235-b6a8-9bc44960878e\" (UID: \"db5aee21-b2c8-4235-b6a8-9bc44960878e\") " Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.003472 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db5aee21-b2c8-4235-b6a8-9bc44960878e-config\") pod \"db5aee21-b2c8-4235-b6a8-9bc44960878e\" (UID: \"db5aee21-b2c8-4235-b6a8-9bc44960878e\") " Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.003560 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlc6l\" (UniqueName: \"kubernetes.io/projected/db5aee21-b2c8-4235-b6a8-9bc44960878e-kube-api-access-wlc6l\") pod \"db5aee21-b2c8-4235-b6a8-9bc44960878e\" (UID: \"db5aee21-b2c8-4235-b6a8-9bc44960878e\") " Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.003586 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db5aee21-b2c8-4235-b6a8-9bc44960878e-dns-svc\") pod \"db5aee21-b2c8-4235-b6a8-9bc44960878e\" (UID: \"db5aee21-b2c8-4235-b6a8-9bc44960878e\") " Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.003979 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcb2p\" (UniqueName: 
\"kubernetes.io/projected/3237b6a2-9b91-41f2-bcea-21b9f5e91f80-kube-api-access-gcb2p\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.003995 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3237b6a2-9b91-41f2-bcea-21b9f5e91f80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.004003 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3237b6a2-9b91-41f2-bcea-21b9f5e91f80-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.023407 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db5aee21-b2c8-4235-b6a8-9bc44960878e-kube-api-access-wlc6l" (OuterVolumeSpecName: "kube-api-access-wlc6l") pod "db5aee21-b2c8-4235-b6a8-9bc44960878e" (UID: "db5aee21-b2c8-4235-b6a8-9bc44960878e"). InnerVolumeSpecName "kube-api-access-wlc6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.075252 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db5aee21-b2c8-4235-b6a8-9bc44960878e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "db5aee21-b2c8-4235-b6a8-9bc44960878e" (UID: "db5aee21-b2c8-4235-b6a8-9bc44960878e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.077184 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db5aee21-b2c8-4235-b6a8-9bc44960878e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "db5aee21-b2c8-4235-b6a8-9bc44960878e" (UID: "db5aee21-b2c8-4235-b6a8-9bc44960878e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.102907 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db5aee21-b2c8-4235-b6a8-9bc44960878e-config" (OuterVolumeSpecName: "config") pod "db5aee21-b2c8-4235-b6a8-9bc44960878e" (UID: "db5aee21-b2c8-4235-b6a8-9bc44960878e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.105723 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db5aee21-b2c8-4235-b6a8-9bc44960878e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.105749 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db5aee21-b2c8-4235-b6a8-9bc44960878e-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.105760 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlc6l\" (UniqueName: \"kubernetes.io/projected/db5aee21-b2c8-4235-b6a8-9bc44960878e-kube-api-access-wlc6l\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.105769 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db5aee21-b2c8-4235-b6a8-9bc44960878e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.162728 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db5aee21-b2c8-4235-b6a8-9bc44960878e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "db5aee21-b2c8-4235-b6a8-9bc44960878e" (UID: "db5aee21-b2c8-4235-b6a8-9bc44960878e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.208043 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db5aee21-b2c8-4235-b6a8-9bc44960878e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.443488 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gbrql" event={"ID":"3237b6a2-9b91-41f2-bcea-21b9f5e91f80","Type":"ContainerDied","Data":"b95cf1ec9c4959cecafb38132608a88fc2135a22495562cbde4ab9b93885780c"} Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.443749 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b95cf1ec9c4959cecafb38132608a88fc2135a22495562cbde4ab9b93885780c" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.443536 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gbrql" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.448362 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432","Type":"ContainerStarted","Data":"8958ac521317f11ccae29dbd21a8345e663055edfbc680f2afb24015f44250e1"} Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.450366 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l6k27" event={"ID":"0da681f8-0bc1-49c1-b1ae-82ec13f671e1","Type":"ContainerStarted","Data":"2a3b650f2567dc451027a9ee4e884c29086a6c198341b0ff1aad8cab2b7a538c"} Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.450392 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l6k27" event={"ID":"0da681f8-0bc1-49c1-b1ae-82ec13f671e1","Type":"ContainerStarted","Data":"10c15382679c93e2bbd387e5007beec44608748aa8bf762de7afba6c5dc464a4"} Feb 16 21:13:51 crc 
kubenswrapper[4811]: I0216 21:13:51.475460 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18bbdf69-d815-49b8-a29d-8b90a8e2987f","Type":"ContainerStarted","Data":"c725e1fc61d136f0ab43406a4006c67a2c77d8f20403dcc6309729aa00f35908"} Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.499395 4811 generic.go:334] "Generic (PLEG): container finished" podID="0927a3bc-88fd-46ce-b4e7-43ed2a083e2c" containerID="5d49a8837b85df4dd6632a943174d390ec6a97beb7094df212cf8523be382ef9" exitCode=0 Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.499507 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" event={"ID":"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c","Type":"ContainerDied","Data":"5d49a8837b85df4dd6632a943174d390ec6a97beb7094df212cf8523be382ef9"} Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.506086 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dae37545-788b-495d-b91b-01e7fa6cd250","Type":"ContainerStarted","Data":"0b59471480a149e0e015b58e9115a0f4ac93daa40d92e24f3cea91d809067c53"} Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.511811 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-l6k27" podStartSLOduration=16.511792504 podStartE2EDuration="16.511792504s" podCreationTimestamp="2026-02-16 21:13:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:13:51.476280887 +0000 UTC m=+1049.405576835" watchObservedRunningTime="2026-02-16 21:13:51.511792504 +0000 UTC m=+1049.441088442" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.516334 4811 generic.go:334] "Generic (PLEG): container finished" podID="db5aee21-b2c8-4235-b6a8-9bc44960878e" containerID="7edaad3d1fd90d603e2204d1d43193b3e62afcddd7262acda230fa75b7253ada" exitCode=0 Feb 16 
21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.516425 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-qgsz5" event={"ID":"db5aee21-b2c8-4235-b6a8-9bc44960878e","Type":"ContainerDied","Data":"7edaad3d1fd90d603e2204d1d43193b3e62afcddd7262acda230fa75b7253ada"} Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.516453 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-qgsz5" event={"ID":"db5aee21-b2c8-4235-b6a8-9bc44960878e","Type":"ContainerDied","Data":"57f29436c982fc4c85d6190f76dece458d2cc8df96c774277b0c61919506f863"} Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.516469 4811 scope.go:117] "RemoveContainer" containerID="7edaad3d1fd90d603e2204d1d43193b3e62afcddd7262acda230fa75b7253ada" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.516928 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-qgsz5" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.548664 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8xm4f" event={"ID":"3518c12b-e37a-4c8d-bbb5-c84f79d45948","Type":"ContainerStarted","Data":"bccd5a55b686da93c11d141cf741a0cc651c398511c01c45b590ed41ad9def97"} Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.611930 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-8xm4f" podStartSLOduration=4.139882537 podStartE2EDuration="25.611913402s" podCreationTimestamp="2026-02-16 21:13:26 +0000 UTC" firstStartedPulling="2026-02-16 21:13:28.619302861 +0000 UTC m=+1026.548598799" lastFinishedPulling="2026-02-16 21:13:50.091333716 +0000 UTC m=+1048.020629664" observedRunningTime="2026-02-16 21:13:51.581070494 +0000 UTC m=+1049.510366432" watchObservedRunningTime="2026-02-16 21:13:51.611913402 +0000 UTC m=+1049.541209330" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.616835 4811 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-qgsz5"] Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.657775 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-qgsz5"] Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.681587 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-ns5q9"] Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.704873 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-4d766"] Feb 16 21:13:51 crc kubenswrapper[4811]: E0216 21:13:51.705488 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3237b6a2-9b91-41f2-bcea-21b9f5e91f80" containerName="neutron-db-sync" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.705506 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3237b6a2-9b91-41f2-bcea-21b9f5e91f80" containerName="neutron-db-sync" Feb 16 21:13:51 crc kubenswrapper[4811]: E0216 21:13:51.705557 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db5aee21-b2c8-4235-b6a8-9bc44960878e" containerName="init" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.705564 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="db5aee21-b2c8-4235-b6a8-9bc44960878e" containerName="init" Feb 16 21:13:51 crc kubenswrapper[4811]: E0216 21:13:51.705579 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db5aee21-b2c8-4235-b6a8-9bc44960878e" containerName="dnsmasq-dns" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.705586 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="db5aee21-b2c8-4235-b6a8-9bc44960878e" containerName="dnsmasq-dns" Feb 16 21:13:51 crc kubenswrapper[4811]: E0216 21:13:51.705600 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e927c15d-6ca1-4473-a79a-52d223380f18" containerName="dnsmasq-dns" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 
21:13:51.705636 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e927c15d-6ca1-4473-a79a-52d223380f18" containerName="dnsmasq-dns" Feb 16 21:13:51 crc kubenswrapper[4811]: E0216 21:13:51.705646 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e927c15d-6ca1-4473-a79a-52d223380f18" containerName="init" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.705652 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e927c15d-6ca1-4473-a79a-52d223380f18" containerName="init" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.706517 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e927c15d-6ca1-4473-a79a-52d223380f18" containerName="dnsmasq-dns" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.706571 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3237b6a2-9b91-41f2-bcea-21b9f5e91f80" containerName="neutron-db-sync" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.706588 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="db5aee21-b2c8-4235-b6a8-9bc44960878e" containerName="dnsmasq-dns" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.711281 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-4d766" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.725835 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-4d766\" (UID: \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\") " pod="openstack/dnsmasq-dns-6b7b667979-4d766" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.725954 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-config\") pod \"dnsmasq-dns-6b7b667979-4d766\" (UID: \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\") " pod="openstack/dnsmasq-dns-6b7b667979-4d766" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.725990 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-4d766\" (UID: \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\") " pod="openstack/dnsmasq-dns-6b7b667979-4d766" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.726036 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pfvk\" (UniqueName: \"kubernetes.io/projected/405249e2-47a2-46d7-b5db-4bfb1ce2c477-kube-api-access-4pfvk\") pod \"dnsmasq-dns-6b7b667979-4d766\" (UID: \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\") " pod="openstack/dnsmasq-dns-6b7b667979-4d766" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.726062 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-ovsdbserver-sb\") pod 
\"dnsmasq-dns-6b7b667979-4d766\" (UID: \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\") " pod="openstack/dnsmasq-dns-6b7b667979-4d766" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.726133 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-dns-svc\") pod \"dnsmasq-dns-6b7b667979-4d766\" (UID: \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\") " pod="openstack/dnsmasq-dns-6b7b667979-4d766" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.735389 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-4d766"] Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.789311 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-99ff95c78-p6wd9"] Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.793905 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-99ff95c78-p6wd9" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.798213 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.798467 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-lvcs7" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.798471 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.798764 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.808141 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-99ff95c78-p6wd9"] Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.827212 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-config\") pod \"dnsmasq-dns-6b7b667979-4d766\" (UID: \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\") " pod="openstack/dnsmasq-dns-6b7b667979-4d766" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.827257 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-4d766\" (UID: \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\") " pod="openstack/dnsmasq-dns-6b7b667979-4d766" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.827293 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b61fef6-46f1-4197-9eef-c6fa330e5fef-httpd-config\") pod \"neutron-99ff95c78-p6wd9\" (UID: \"0b61fef6-46f1-4197-9eef-c6fa330e5fef\") " pod="openstack/neutron-99ff95c78-p6wd9" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.827317 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pfvk\" (UniqueName: \"kubernetes.io/projected/405249e2-47a2-46d7-b5db-4bfb1ce2c477-kube-api-access-4pfvk\") pod \"dnsmasq-dns-6b7b667979-4d766\" (UID: \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\") " pod="openstack/dnsmasq-dns-6b7b667979-4d766" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.827362 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-4d766\" (UID: \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\") " pod="openstack/dnsmasq-dns-6b7b667979-4d766" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.827445 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-dns-svc\") pod \"dnsmasq-dns-6b7b667979-4d766\" (UID: \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\") " pod="openstack/dnsmasq-dns-6b7b667979-4d766" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.827476 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b61fef6-46f1-4197-9eef-c6fa330e5fef-combined-ca-bundle\") pod \"neutron-99ff95c78-p6wd9\" (UID: \"0b61fef6-46f1-4197-9eef-c6fa330e5fef\") " pod="openstack/neutron-99ff95c78-p6wd9" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.827497 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-4d766\" (UID: \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\") " pod="openstack/dnsmasq-dns-6b7b667979-4d766" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.827518 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b61fef6-46f1-4197-9eef-c6fa330e5fef-config\") pod \"neutron-99ff95c78-p6wd9\" (UID: \"0b61fef6-46f1-4197-9eef-c6fa330e5fef\") " pod="openstack/neutron-99ff95c78-p6wd9" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.827551 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9nf7\" (UniqueName: \"kubernetes.io/projected/0b61fef6-46f1-4197-9eef-c6fa330e5fef-kube-api-access-s9nf7\") pod \"neutron-99ff95c78-p6wd9\" (UID: \"0b61fef6-46f1-4197-9eef-c6fa330e5fef\") " pod="openstack/neutron-99ff95c78-p6wd9" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.827575 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0b61fef6-46f1-4197-9eef-c6fa330e5fef-ovndb-tls-certs\") pod \"neutron-99ff95c78-p6wd9\" (UID: \"0b61fef6-46f1-4197-9eef-c6fa330e5fef\") " pod="openstack/neutron-99ff95c78-p6wd9" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.828526 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-config\") pod \"dnsmasq-dns-6b7b667979-4d766\" (UID: \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\") " pod="openstack/dnsmasq-dns-6b7b667979-4d766" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.829503 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-4d766\" (UID: \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\") " pod="openstack/dnsmasq-dns-6b7b667979-4d766" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.829542 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-4d766\" (UID: \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\") " pod="openstack/dnsmasq-dns-6b7b667979-4d766" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.830013 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-4d766\" (UID: \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\") " pod="openstack/dnsmasq-dns-6b7b667979-4d766" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.830515 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-dns-svc\") pod \"dnsmasq-dns-6b7b667979-4d766\" (UID: 
\"405249e2-47a2-46d7-b5db-4bfb1ce2c477\") " pod="openstack/dnsmasq-dns-6b7b667979-4d766" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.853814 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pfvk\" (UniqueName: \"kubernetes.io/projected/405249e2-47a2-46d7-b5db-4bfb1ce2c477-kube-api-access-4pfvk\") pod \"dnsmasq-dns-6b7b667979-4d766\" (UID: \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\") " pod="openstack/dnsmasq-dns-6b7b667979-4d766" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.930435 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b61fef6-46f1-4197-9eef-c6fa330e5fef-combined-ca-bundle\") pod \"neutron-99ff95c78-p6wd9\" (UID: \"0b61fef6-46f1-4197-9eef-c6fa330e5fef\") " pod="openstack/neutron-99ff95c78-p6wd9" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.930478 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b61fef6-46f1-4197-9eef-c6fa330e5fef-config\") pod \"neutron-99ff95c78-p6wd9\" (UID: \"0b61fef6-46f1-4197-9eef-c6fa330e5fef\") " pod="openstack/neutron-99ff95c78-p6wd9" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.930519 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9nf7\" (UniqueName: \"kubernetes.io/projected/0b61fef6-46f1-4197-9eef-c6fa330e5fef-kube-api-access-s9nf7\") pod \"neutron-99ff95c78-p6wd9\" (UID: \"0b61fef6-46f1-4197-9eef-c6fa330e5fef\") " pod="openstack/neutron-99ff95c78-p6wd9" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.930546 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b61fef6-46f1-4197-9eef-c6fa330e5fef-ovndb-tls-certs\") pod \"neutron-99ff95c78-p6wd9\" (UID: \"0b61fef6-46f1-4197-9eef-c6fa330e5fef\") " pod="openstack/neutron-99ff95c78-p6wd9" 
Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.930612 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b61fef6-46f1-4197-9eef-c6fa330e5fef-httpd-config\") pod \"neutron-99ff95c78-p6wd9\" (UID: \"0b61fef6-46f1-4197-9eef-c6fa330e5fef\") " pod="openstack/neutron-99ff95c78-p6wd9" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.937508 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b61fef6-46f1-4197-9eef-c6fa330e5fef-httpd-config\") pod \"neutron-99ff95c78-p6wd9\" (UID: \"0b61fef6-46f1-4197-9eef-c6fa330e5fef\") " pod="openstack/neutron-99ff95c78-p6wd9" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.939846 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b61fef6-46f1-4197-9eef-c6fa330e5fef-config\") pod \"neutron-99ff95c78-p6wd9\" (UID: \"0b61fef6-46f1-4197-9eef-c6fa330e5fef\") " pod="openstack/neutron-99ff95c78-p6wd9" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.944676 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b61fef6-46f1-4197-9eef-c6fa330e5fef-combined-ca-bundle\") pod \"neutron-99ff95c78-p6wd9\" (UID: \"0b61fef6-46f1-4197-9eef-c6fa330e5fef\") " pod="openstack/neutron-99ff95c78-p6wd9" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.946472 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b61fef6-46f1-4197-9eef-c6fa330e5fef-ovndb-tls-certs\") pod \"neutron-99ff95c78-p6wd9\" (UID: \"0b61fef6-46f1-4197-9eef-c6fa330e5fef\") " pod="openstack/neutron-99ff95c78-p6wd9" Feb 16 21:13:51 crc kubenswrapper[4811]: I0216 21:13:51.951245 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9nf7\" 
(UniqueName: \"kubernetes.io/projected/0b61fef6-46f1-4197-9eef-c6fa330e5fef-kube-api-access-s9nf7\") pod \"neutron-99ff95c78-p6wd9\" (UID: \"0b61fef6-46f1-4197-9eef-c6fa330e5fef\") " pod="openstack/neutron-99ff95c78-p6wd9" Feb 16 21:13:52 crc kubenswrapper[4811]: I0216 21:13:52.041611 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-4d766" Feb 16 21:13:52 crc kubenswrapper[4811]: I0216 21:13:52.121971 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6ffb94d8ff-pxsxp" podUID="e927c15d-6ca1-4473-a79a-52d223380f18" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.160:5353: i/o timeout" Feb 16 21:13:52 crc kubenswrapper[4811]: I0216 21:13:52.135788 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-99ff95c78-p6wd9" Feb 16 21:13:52 crc kubenswrapper[4811]: I0216 21:13:52.550061 4811 scope.go:117] "RemoveContainer" containerID="1a26f746efb586c1cd39a15932ce1c1db7da1f3cceaffa000f3eccaaf7831d8d" Feb 16 21:13:52 crc kubenswrapper[4811]: I0216 21:13:52.568671 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dae37545-788b-495d-b91b-01e7fa6cd250" containerName="glance-log" containerID="cri-o://0b59471480a149e0e015b58e9115a0f4ac93daa40d92e24f3cea91d809067c53" gracePeriod=30 Feb 16 21:13:52 crc kubenswrapper[4811]: I0216 21:13:52.568754 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dae37545-788b-495d-b91b-01e7fa6cd250" containerName="glance-httpd" containerID="cri-o://68dd7af53e7cd3992564eb482e37d28ad75a5f24434c844c2e46419c47202659" gracePeriod=30 Feb 16 21:13:52 crc kubenswrapper[4811]: I0216 21:13:52.568771 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"dae37545-788b-495d-b91b-01e7fa6cd250","Type":"ContainerStarted","Data":"68dd7af53e7cd3992564eb482e37d28ad75a5f24434c844c2e46419c47202659"} Feb 16 21:13:52 crc kubenswrapper[4811]: I0216 21:13:52.584870 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1edfbd65-bed1-4d1b-86ae-d7fefa3e4432" containerName="glance-log" containerID="cri-o://8958ac521317f11ccae29dbd21a8345e663055edfbc680f2afb24015f44250e1" gracePeriod=30 Feb 16 21:13:52 crc kubenswrapper[4811]: I0216 21:13:52.585099 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432","Type":"ContainerStarted","Data":"bc903609e7754f8103e3e4fcb3981c96e7871ecbb3117d9e5ccd56fd1ac91ad6"} Feb 16 21:13:52 crc kubenswrapper[4811]: I0216 21:13:52.585534 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1edfbd65-bed1-4d1b-86ae-d7fefa3e4432" containerName="glance-httpd" containerID="cri-o://bc903609e7754f8103e3e4fcb3981c96e7871ecbb3117d9e5ccd56fd1ac91ad6" gracePeriod=30 Feb 16 21:13:52 crc kubenswrapper[4811]: I0216 21:13:52.614875 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=22.614854674 podStartE2EDuration="22.614854674s" podCreationTimestamp="2026-02-16 21:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:13:52.599594694 +0000 UTC m=+1050.528890632" watchObservedRunningTime="2026-02-16 21:13:52.614854674 +0000 UTC m=+1050.544150612" Feb 16 21:13:52 crc kubenswrapper[4811]: I0216 21:13:52.655494 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=22.655476532 
podStartE2EDuration="22.655476532s" podCreationTimestamp="2026-02-16 21:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:13:52.627435225 +0000 UTC m=+1050.556731163" watchObservedRunningTime="2026-02-16 21:13:52.655476532 +0000 UTC m=+1050.584772470" Feb 16 21:13:52 crc kubenswrapper[4811]: I0216 21:13:52.742674 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db5aee21-b2c8-4235-b6a8-9bc44960878e" path="/var/lib/kubelet/pods/db5aee21-b2c8-4235-b6a8-9bc44960878e/volumes" Feb 16 21:13:52 crc kubenswrapper[4811]: I0216 21:13:52.926426 4811 scope.go:117] "RemoveContainer" containerID="7edaad3d1fd90d603e2204d1d43193b3e62afcddd7262acda230fa75b7253ada" Feb 16 21:13:52 crc kubenswrapper[4811]: E0216 21:13:52.927371 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7edaad3d1fd90d603e2204d1d43193b3e62afcddd7262acda230fa75b7253ada\": container with ID starting with 7edaad3d1fd90d603e2204d1d43193b3e62afcddd7262acda230fa75b7253ada not found: ID does not exist" containerID="7edaad3d1fd90d603e2204d1d43193b3e62afcddd7262acda230fa75b7253ada" Feb 16 21:13:52 crc kubenswrapper[4811]: I0216 21:13:52.927436 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7edaad3d1fd90d603e2204d1d43193b3e62afcddd7262acda230fa75b7253ada"} err="failed to get container status \"7edaad3d1fd90d603e2204d1d43193b3e62afcddd7262acda230fa75b7253ada\": rpc error: code = NotFound desc = could not find container \"7edaad3d1fd90d603e2204d1d43193b3e62afcddd7262acda230fa75b7253ada\": container with ID starting with 7edaad3d1fd90d603e2204d1d43193b3e62afcddd7262acda230fa75b7253ada not found: ID does not exist" Feb 16 21:13:52 crc kubenswrapper[4811]: I0216 21:13:52.927462 4811 scope.go:117] "RemoveContainer" 
containerID="1a26f746efb586c1cd39a15932ce1c1db7da1f3cceaffa000f3eccaaf7831d8d" Feb 16 21:13:52 crc kubenswrapper[4811]: E0216 21:13:52.927683 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a26f746efb586c1cd39a15932ce1c1db7da1f3cceaffa000f3eccaaf7831d8d\": container with ID starting with 1a26f746efb586c1cd39a15932ce1c1db7da1f3cceaffa000f3eccaaf7831d8d not found: ID does not exist" containerID="1a26f746efb586c1cd39a15932ce1c1db7da1f3cceaffa000f3eccaaf7831d8d" Feb 16 21:13:52 crc kubenswrapper[4811]: I0216 21:13:52.927698 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a26f746efb586c1cd39a15932ce1c1db7da1f3cceaffa000f3eccaaf7831d8d"} err="failed to get container status \"1a26f746efb586c1cd39a15932ce1c1db7da1f3cceaffa000f3eccaaf7831d8d\": rpc error: code = NotFound desc = could not find container \"1a26f746efb586c1cd39a15932ce1c1db7da1f3cceaffa000f3eccaaf7831d8d\": container with ID starting with 1a26f746efb586c1cd39a15932ce1c1db7da1f3cceaffa000f3eccaaf7831d8d not found: ID does not exist" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.324732 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-4d766"] Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.492688 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-99ff95c78-p6wd9"] Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.605803 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" event={"ID":"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c","Type":"ContainerStarted","Data":"44414ee3d75a78616f0a8eeb76050aff682141e6038f68cecae419bd250b3c4d"} Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.606070 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" 
podUID="0927a3bc-88fd-46ce-b4e7-43ed2a083e2c" containerName="dnsmasq-dns" containerID="cri-o://44414ee3d75a78616f0a8eeb76050aff682141e6038f68cecae419bd250b3c4d" gracePeriod=10 Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.606425 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.617148 4811 generic.go:334] "Generic (PLEG): container finished" podID="dae37545-788b-495d-b91b-01e7fa6cd250" containerID="68dd7af53e7cd3992564eb482e37d28ad75a5f24434c844c2e46419c47202659" exitCode=0 Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.617183 4811 generic.go:334] "Generic (PLEG): container finished" podID="dae37545-788b-495d-b91b-01e7fa6cd250" containerID="0b59471480a149e0e015b58e9115a0f4ac93daa40d92e24f3cea91d809067c53" exitCode=143 Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.617224 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dae37545-788b-495d-b91b-01e7fa6cd250","Type":"ContainerDied","Data":"68dd7af53e7cd3992564eb482e37d28ad75a5f24434c844c2e46419c47202659"} Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.617269 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dae37545-788b-495d-b91b-01e7fa6cd250","Type":"ContainerDied","Data":"0b59471480a149e0e015b58e9115a0f4ac93daa40d92e24f3cea91d809067c53"} Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.631477 4811 generic.go:334] "Generic (PLEG): container finished" podID="3518c12b-e37a-4c8d-bbb5-c84f79d45948" containerID="bccd5a55b686da93c11d141cf741a0cc651c398511c01c45b590ed41ad9def97" exitCode=0 Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.631620 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8xm4f" 
event={"ID":"3518c12b-e37a-4c8d-bbb5-c84f79d45948","Type":"ContainerDied","Data":"bccd5a55b686da93c11d141cf741a0cc651c398511c01c45b590ed41ad9def97"} Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.636289 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" podStartSLOduration=21.636271217 podStartE2EDuration="21.636271217s" podCreationTimestamp="2026-02-16 21:13:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:13:53.629694899 +0000 UTC m=+1051.558990837" watchObservedRunningTime="2026-02-16 21:13:53.636271217 +0000 UTC m=+1051.565567155" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.636873 4811 generic.go:334] "Generic (PLEG): container finished" podID="1edfbd65-bed1-4d1b-86ae-d7fefa3e4432" containerID="bc903609e7754f8103e3e4fcb3981c96e7871ecbb3117d9e5ccd56fd1ac91ad6" exitCode=0 Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.636896 4811 generic.go:334] "Generic (PLEG): container finished" podID="1edfbd65-bed1-4d1b-86ae-d7fefa3e4432" containerID="8958ac521317f11ccae29dbd21a8345e663055edfbc680f2afb24015f44250e1" exitCode=143 Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.636960 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432","Type":"ContainerDied","Data":"bc903609e7754f8103e3e4fcb3981c96e7871ecbb3117d9e5ccd56fd1ac91ad6"} Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.637004 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432","Type":"ContainerDied","Data":"8958ac521317f11ccae29dbd21a8345e663055edfbc680f2afb24015f44250e1"} Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.652753 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-99ff95c78-p6wd9" event={"ID":"0b61fef6-46f1-4197-9eef-c6fa330e5fef","Type":"ContainerStarted","Data":"05396d6012dd37b3b89acdc718f9e716c0c577025fc1168609f6564ecaa143d9"} Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.665638 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18bbdf69-d815-49b8-a29d-8b90a8e2987f","Type":"ContainerStarted","Data":"56c22659c3119094093ae4bc144a11c38520b8cb0093d16756bb587b1bc16d59"} Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.694456 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-4d766" event={"ID":"405249e2-47a2-46d7-b5db-4bfb1ce2c477","Type":"ContainerStarted","Data":"1e2f41c6beedee19ba0a367dabad552a209fb6818dcbbcaef9f31ebf44ab1f94"} Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.742397 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.815445 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.846474 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5b9874789f-2tq4q"] Feb 16 21:13:53 crc kubenswrapper[4811]: E0216 21:13:53.846937 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae37545-788b-495d-b91b-01e7fa6cd250" containerName="glance-log" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.846955 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae37545-788b-495d-b91b-01e7fa6cd250" containerName="glance-log" Feb 16 21:13:53 crc kubenswrapper[4811]: E0216 21:13:53.846988 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edfbd65-bed1-4d1b-86ae-d7fefa3e4432" containerName="glance-httpd" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.846995 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edfbd65-bed1-4d1b-86ae-d7fefa3e4432" containerName="glance-httpd" Feb 16 21:13:53 crc kubenswrapper[4811]: E0216 21:13:53.847012 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edfbd65-bed1-4d1b-86ae-d7fefa3e4432" containerName="glance-log" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.847019 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edfbd65-bed1-4d1b-86ae-d7fefa3e4432" containerName="glance-log" Feb 16 21:13:53 crc kubenswrapper[4811]: E0216 21:13:53.847034 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae37545-788b-495d-b91b-01e7fa6cd250" containerName="glance-httpd" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.847041 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae37545-788b-495d-b91b-01e7fa6cd250" containerName="glance-httpd" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.847212 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="dae37545-788b-495d-b91b-01e7fa6cd250" containerName="glance-log" Feb 16 21:13:53 
crc kubenswrapper[4811]: I0216 21:13:53.847227 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="dae37545-788b-495d-b91b-01e7fa6cd250" containerName="glance-httpd" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.847244 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edfbd65-bed1-4d1b-86ae-d7fefa3e4432" containerName="glance-httpd" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.847259 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edfbd65-bed1-4d1b-86ae-d7fefa3e4432" containerName="glance-log" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.848241 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.853309 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.853679 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.882138 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b9874789f-2tq4q"] Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.887005 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-config-data\") pod \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.887375 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf9mr\" (UniqueName: \"kubernetes.io/projected/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-kube-api-access-mf9mr\") pod \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " Feb 16 21:13:53 crc 
kubenswrapper[4811]: I0216 21:13:53.887473 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-logs\") pod \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.887540 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-combined-ca-bundle\") pod \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.887681 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\") pod \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.887754 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-scripts\") pod \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.887811 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-httpd-run\") pod \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\" (UID: \"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432\") " Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.889072 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-logs" (OuterVolumeSpecName: "logs") pod "1edfbd65-bed1-4d1b-86ae-d7fefa3e4432" 
(UID: "1edfbd65-bed1-4d1b-86ae-d7fefa3e4432"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.889579 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1edfbd65-bed1-4d1b-86ae-d7fefa3e4432" (UID: "1edfbd65-bed1-4d1b-86ae-d7fefa3e4432"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.898457 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-kube-api-access-mf9mr" (OuterVolumeSpecName: "kube-api-access-mf9mr") pod "1edfbd65-bed1-4d1b-86ae-d7fefa3e4432" (UID: "1edfbd65-bed1-4d1b-86ae-d7fefa3e4432"). InnerVolumeSpecName "kube-api-access-mf9mr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.901151 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-scripts" (OuterVolumeSpecName: "scripts") pod "1edfbd65-bed1-4d1b-86ae-d7fefa3e4432" (UID: "1edfbd65-bed1-4d1b-86ae-d7fefa3e4432"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.933115 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e" (OuterVolumeSpecName: "glance") pod "1edfbd65-bed1-4d1b-86ae-d7fefa3e4432" (UID: "1edfbd65-bed1-4d1b-86ae-d7fefa3e4432"). InnerVolumeSpecName "pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.934545 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1edfbd65-bed1-4d1b-86ae-d7fefa3e4432" (UID: "1edfbd65-bed1-4d1b-86ae-d7fefa3e4432"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.957286 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-config-data" (OuterVolumeSpecName: "config-data") pod "1edfbd65-bed1-4d1b-86ae-d7fefa3e4432" (UID: "1edfbd65-bed1-4d1b-86ae-d7fefa3e4432"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.990255 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-953416f2-8442-4b16-a122-58a357229e61\") pod \"dae37545-788b-495d-b91b-01e7fa6cd250\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.991609 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dae37545-788b-495d-b91b-01e7fa6cd250-httpd-run\") pod \"dae37545-788b-495d-b91b-01e7fa6cd250\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.992227 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dae37545-788b-495d-b91b-01e7fa6cd250-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "dae37545-788b-495d-b91b-01e7fa6cd250" (UID: "dae37545-788b-495d-b91b-01e7fa6cd250"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.992264 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae37545-788b-495d-b91b-01e7fa6cd250-combined-ca-bundle\") pod \"dae37545-788b-495d-b91b-01e7fa6cd250\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.992325 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae37545-788b-495d-b91b-01e7fa6cd250-config-data\") pod \"dae37545-788b-495d-b91b-01e7fa6cd250\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.992398 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dae37545-788b-495d-b91b-01e7fa6cd250-logs\") pod \"dae37545-788b-495d-b91b-01e7fa6cd250\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.992439 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tfns\" (UniqueName: \"kubernetes.io/projected/dae37545-788b-495d-b91b-01e7fa6cd250-kube-api-access-8tfns\") pod \"dae37545-788b-495d-b91b-01e7fa6cd250\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.992464 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dae37545-788b-495d-b91b-01e7fa6cd250-scripts\") pod \"dae37545-788b-495d-b91b-01e7fa6cd250\" (UID: \"dae37545-788b-495d-b91b-01e7fa6cd250\") " Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.992664 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-internal-tls-certs\") pod \"neutron-5b9874789f-2tq4q\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.992693 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-config\") pod \"neutron-5b9874789f-2tq4q\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.992803 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-combined-ca-bundle\") pod \"neutron-5b9874789f-2tq4q\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.992846 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw9xc\" (UniqueName: \"kubernetes.io/projected/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-kube-api-access-cw9xc\") pod \"neutron-5b9874789f-2tq4q\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.992911 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-public-tls-certs\") pod \"neutron-5b9874789f-2tq4q\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.992934 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-httpd-config\") pod \"neutron-5b9874789f-2tq4q\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.993604 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dae37545-788b-495d-b91b-01e7fa6cd250-logs" (OuterVolumeSpecName: "logs") pod "dae37545-788b-495d-b91b-01e7fa6cd250" (UID: "dae37545-788b-495d-b91b-01e7fa6cd250"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.993634 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-ovndb-tls-certs\") pod \"neutron-5b9874789f-2tq4q\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.994049 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf9mr\" (UniqueName: \"kubernetes.io/projected/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-kube-api-access-mf9mr\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.994075 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.994087 4811 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dae37545-788b-495d-b91b-01e7fa6cd250-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.994097 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.994126 4811 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\") on node \"crc\" " Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.994142 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.994155 4811 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.994168 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dae37545-788b-495d-b91b-01e7fa6cd250-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:53 crc kubenswrapper[4811]: I0216 21:13:53.994179 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.018372 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dae37545-788b-495d-b91b-01e7fa6cd250-scripts" (OuterVolumeSpecName: "scripts") pod "dae37545-788b-495d-b91b-01e7fa6cd250" (UID: "dae37545-788b-495d-b91b-01e7fa6cd250"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.021496 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dae37545-788b-495d-b91b-01e7fa6cd250-kube-api-access-8tfns" (OuterVolumeSpecName: "kube-api-access-8tfns") pod "dae37545-788b-495d-b91b-01e7fa6cd250" (UID: "dae37545-788b-495d-b91b-01e7fa6cd250"). InnerVolumeSpecName "kube-api-access-8tfns". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.032839 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-953416f2-8442-4b16-a122-58a357229e61" (OuterVolumeSpecName: "glance") pod "dae37545-788b-495d-b91b-01e7fa6cd250" (UID: "dae37545-788b-495d-b91b-01e7fa6cd250"). InnerVolumeSpecName "pvc-953416f2-8442-4b16-a122-58a357229e61". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.036057 4811 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.036208 4811 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e") on node "crc" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.054190 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dae37545-788b-495d-b91b-01e7fa6cd250-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dae37545-788b-495d-b91b-01e7fa6cd250" (UID: "dae37545-788b-495d-b91b-01e7fa6cd250"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.095592 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-ovndb-tls-certs\") pod \"neutron-5b9874789f-2tq4q\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.095658 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-internal-tls-certs\") pod \"neutron-5b9874789f-2tq4q\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.095682 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-config\") pod \"neutron-5b9874789f-2tq4q\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.095750 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-combined-ca-bundle\") pod \"neutron-5b9874789f-2tq4q\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.095782 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw9xc\" (UniqueName: \"kubernetes.io/projected/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-kube-api-access-cw9xc\") pod \"neutron-5b9874789f-2tq4q\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:13:54 crc 
kubenswrapper[4811]: I0216 21:13:54.095830 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-public-tls-certs\") pod \"neutron-5b9874789f-2tq4q\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.095851 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-httpd-config\") pod \"neutron-5b9874789f-2tq4q\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.095901 4811 reconciler_common.go:293] "Volume detached for volume \"pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.095917 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae37545-788b-495d-b91b-01e7fa6cd250-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.095927 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tfns\" (UniqueName: \"kubernetes.io/projected/dae37545-788b-495d-b91b-01e7fa6cd250-kube-api-access-8tfns\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.095936 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dae37545-788b-495d-b91b-01e7fa6cd250-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.095956 4811 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume 
\"pvc-953416f2-8442-4b16-a122-58a357229e61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-953416f2-8442-4b16-a122-58a357229e61\") on node \"crc\" " Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.104175 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-config\") pod \"neutron-5b9874789f-2tq4q\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.106211 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-combined-ca-bundle\") pod \"neutron-5b9874789f-2tq4q\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.106560 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-public-tls-certs\") pod \"neutron-5b9874789f-2tq4q\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.110846 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-internal-tls-certs\") pod \"neutron-5b9874789f-2tq4q\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.122904 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-ovndb-tls-certs\") pod \"neutron-5b9874789f-2tq4q\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " 
pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.125454 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw9xc\" (UniqueName: \"kubernetes.io/projected/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-kube-api-access-cw9xc\") pod \"neutron-5b9874789f-2tq4q\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.147115 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-httpd-config\") pod \"neutron-5b9874789f-2tq4q\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.147292 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dae37545-788b-495d-b91b-01e7fa6cd250-config-data" (OuterVolumeSpecName: "config-data") pod "dae37545-788b-495d-b91b-01e7fa6cd250" (UID: "dae37545-788b-495d-b91b-01e7fa6cd250"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.170741 4811 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.170970 4811 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-953416f2-8442-4b16-a122-58a357229e61" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-953416f2-8442-4b16-a122-58a357229e61") on node "crc" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.190594 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.198652 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae37545-788b-495d-b91b-01e7fa6cd250-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.198684 4811 reconciler_common.go:293] "Volume detached for volume \"pvc-953416f2-8442-4b16-a122-58a357229e61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-953416f2-8442-4b16-a122-58a357229e61\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.328399 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.503394 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nnfs\" (UniqueName: \"kubernetes.io/projected/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-kube-api-access-2nnfs\") pod \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\" (UID: \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\") " Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.503716 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-dns-svc\") pod \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\" (UID: \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\") " Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.503755 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-config\") pod \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\" (UID: \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\") " Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.503793 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-dns-swift-storage-0\") pod \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\" (UID: \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\") " Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.503838 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-ovsdbserver-nb\") pod \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\" (UID: \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\") " Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.503961 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-ovsdbserver-sb\") pod \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\" (UID: \"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c\") " Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.511342 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-kube-api-access-2nnfs" (OuterVolumeSpecName: "kube-api-access-2nnfs") pod "0927a3bc-88fd-46ce-b4e7-43ed2a083e2c" (UID: "0927a3bc-88fd-46ce-b4e7-43ed2a083e2c"). InnerVolumeSpecName "kube-api-access-2nnfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.586716 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0927a3bc-88fd-46ce-b4e7-43ed2a083e2c" (UID: "0927a3bc-88fd-46ce-b4e7-43ed2a083e2c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.594126 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0927a3bc-88fd-46ce-b4e7-43ed2a083e2c" (UID: "0927a3bc-88fd-46ce-b4e7-43ed2a083e2c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.599306 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0927a3bc-88fd-46ce-b4e7-43ed2a083e2c" (UID: "0927a3bc-88fd-46ce-b4e7-43ed2a083e2c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.606148 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.606525 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.606547 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nnfs\" (UniqueName: \"kubernetes.io/projected/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-kube-api-access-2nnfs\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.606557 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 
21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.608183 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-config" (OuterVolumeSpecName: "config") pod "0927a3bc-88fd-46ce-b4e7-43ed2a083e2c" (UID: "0927a3bc-88fd-46ce-b4e7-43ed2a083e2c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.630521 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0927a3bc-88fd-46ce-b4e7-43ed2a083e2c" (UID: "0927a3bc-88fd-46ce-b4e7-43ed2a083e2c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.707894 4811 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.708078 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.716909 4811 generic.go:334] "Generic (PLEG): container finished" podID="405249e2-47a2-46d7-b5db-4bfb1ce2c477" containerID="c330b4a38a47bf2c090a09882de386107781df5dc6084aa4cf6713b6af3bdabb" exitCode=0 Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.720082 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-4d766" event={"ID":"405249e2-47a2-46d7-b5db-4bfb1ce2c477","Type":"ContainerStarted","Data":"5e410f0115b61d53e645d5e282c1b7ac5836b0274ff964cb6e5db3ce79d08e65"} Feb 16 21:13:54 crc 
kubenswrapper[4811]: I0216 21:13:54.720140 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-4d766" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.720176 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-4d766" event={"ID":"405249e2-47a2-46d7-b5db-4bfb1ce2c477","Type":"ContainerDied","Data":"c330b4a38a47bf2c090a09882de386107781df5dc6084aa4cf6713b6af3bdabb"} Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.720580 4811 generic.go:334] "Generic (PLEG): container finished" podID="0927a3bc-88fd-46ce-b4e7-43ed2a083e2c" containerID="44414ee3d75a78616f0a8eeb76050aff682141e6038f68cecae419bd250b3c4d" exitCode=0 Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.720648 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" event={"ID":"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c","Type":"ContainerDied","Data":"44414ee3d75a78616f0a8eeb76050aff682141e6038f68cecae419bd250b3c4d"} Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.720679 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" event={"ID":"0927a3bc-88fd-46ce-b4e7-43ed2a083e2c","Type":"ContainerDied","Data":"0a912e3d3c58243e733cf6f6f76d6ffc2acf82a345e5682ec6b8d94171c95009"} Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.720698 4811 scope.go:117] "RemoveContainer" containerID="44414ee3d75a78616f0a8eeb76050aff682141e6038f68cecae419bd250b3c4d" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.720804 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-ns5q9" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.726774 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dae37545-788b-495d-b91b-01e7fa6cd250","Type":"ContainerDied","Data":"f3b9744013fb79ead5790418de73f586f05ca45e93acf92fd7bcf02a7cb5afb0"} Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.726793 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.730565 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1edfbd65-bed1-4d1b-86ae-d7fefa3e4432","Type":"ContainerDied","Data":"b8ae4e6b85f4c5c671ad04275d3599380dffc390da4d58161c35f99a2a22be86"} Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.730701 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.735942 4811 generic.go:334] "Generic (PLEG): container finished" podID="0da681f8-0bc1-49c1-b1ae-82ec13f671e1" containerID="2a3b650f2567dc451027a9ee4e884c29086a6c198341b0ff1aad8cab2b7a538c" exitCode=0 Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.736024 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l6k27" event={"ID":"0da681f8-0bc1-49c1-b1ae-82ec13f671e1","Type":"ContainerDied","Data":"2a3b650f2567dc451027a9ee4e884c29086a6c198341b0ff1aad8cab2b7a538c"} Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.740884 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-99ff95c78-p6wd9" event={"ID":"0b61fef6-46f1-4197-9eef-c6fa330e5fef","Type":"ContainerStarted","Data":"7a68d629e5d4a099bdcbf7a0ad086cc31984616c233d3c43845192be84303049"} Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.740918 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-99ff95c78-p6wd9" event={"ID":"0b61fef6-46f1-4197-9eef-c6fa330e5fef","Type":"ContainerStarted","Data":"204a71e35621245a2a52081f4a6d75f90c3b70bce73dc68fbd4d1e29aa0b42b8"} Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.741022 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-99ff95c78-p6wd9" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.742635 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-4d766" podStartSLOduration=3.742621351 podStartE2EDuration="3.742621351s" podCreationTimestamp="2026-02-16 21:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:13:54.737589413 +0000 UTC m=+1052.666885351" watchObservedRunningTime="2026-02-16 21:13:54.742621351 +0000 UTC m=+1052.671917289" 
Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.830155 4811 scope.go:117] "RemoveContainer" containerID="5d49a8837b85df4dd6632a943174d390ec6a97beb7094df212cf8523be382ef9" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.832394 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-99ff95c78-p6wd9" podStartSLOduration=3.832356054 podStartE2EDuration="3.832356054s" podCreationTimestamp="2026-02-16 21:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:13:54.807776676 +0000 UTC m=+1052.737072624" watchObservedRunningTime="2026-02-16 21:13:54.832356054 +0000 UTC m=+1052.761651992" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.918440 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-ns5q9"] Feb 16 21:13:54 crc kubenswrapper[4811]: W0216 21:13:54.930568 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef08d7ef_0bd9_4126_bd7b_0d46b646be40.slice/crio-1f2babe0f72266a5a14e5ba1cae8cc3a4d0b5a1a64c9fa744767e4f2e201a33d WatchSource:0}: Error finding container 1f2babe0f72266a5a14e5ba1cae8cc3a4d0b5a1a64c9fa744767e4f2e201a33d: Status 404 returned error can't find the container with id 1f2babe0f72266a5a14e5ba1cae8cc3a4d0b5a1a64c9fa744767e4f2e201a33d Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.945718 4811 scope.go:117] "RemoveContainer" containerID="44414ee3d75a78616f0a8eeb76050aff682141e6038f68cecae419bd250b3c4d" Feb 16 21:13:54 crc kubenswrapper[4811]: E0216 21:13:54.948680 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44414ee3d75a78616f0a8eeb76050aff682141e6038f68cecae419bd250b3c4d\": container with ID starting with 44414ee3d75a78616f0a8eeb76050aff682141e6038f68cecae419bd250b3c4d not found: ID 
does not exist" containerID="44414ee3d75a78616f0a8eeb76050aff682141e6038f68cecae419bd250b3c4d" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.948739 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44414ee3d75a78616f0a8eeb76050aff682141e6038f68cecae419bd250b3c4d"} err="failed to get container status \"44414ee3d75a78616f0a8eeb76050aff682141e6038f68cecae419bd250b3c4d\": rpc error: code = NotFound desc = could not find container \"44414ee3d75a78616f0a8eeb76050aff682141e6038f68cecae419bd250b3c4d\": container with ID starting with 44414ee3d75a78616f0a8eeb76050aff682141e6038f68cecae419bd250b3c4d not found: ID does not exist" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.948769 4811 scope.go:117] "RemoveContainer" containerID="5d49a8837b85df4dd6632a943174d390ec6a97beb7094df212cf8523be382ef9" Feb 16 21:13:54 crc kubenswrapper[4811]: E0216 21:13:54.951810 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d49a8837b85df4dd6632a943174d390ec6a97beb7094df212cf8523be382ef9\": container with ID starting with 5d49a8837b85df4dd6632a943174d390ec6a97beb7094df212cf8523be382ef9 not found: ID does not exist" containerID="5d49a8837b85df4dd6632a943174d390ec6a97beb7094df212cf8523be382ef9" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.951869 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d49a8837b85df4dd6632a943174d390ec6a97beb7094df212cf8523be382ef9"} err="failed to get container status \"5d49a8837b85df4dd6632a943174d390ec6a97beb7094df212cf8523be382ef9\": rpc error: code = NotFound desc = could not find container \"5d49a8837b85df4dd6632a943174d390ec6a97beb7094df212cf8523be382ef9\": container with ID starting with 5d49a8837b85df4dd6632a943174d390ec6a97beb7094df212cf8523be382ef9 not found: ID does not exist" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.951902 4811 
scope.go:117] "RemoveContainer" containerID="68dd7af53e7cd3992564eb482e37d28ad75a5f24434c844c2e46419c47202659" Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.953528 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-ns5q9"] Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.960742 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.969777 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.979329 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:13:54 crc kubenswrapper[4811]: I0216 21:13:54.995326 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.001638 4811 scope.go:117] "RemoveContainer" containerID="0b59471480a149e0e015b58e9115a0f4ac93daa40d92e24f3cea91d809067c53" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.008895 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b9874789f-2tq4q"] Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.017717 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:13:55 crc kubenswrapper[4811]: E0216 21:13:55.018163 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0927a3bc-88fd-46ce-b4e7-43ed2a083e2c" containerName="dnsmasq-dns" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.018177 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0927a3bc-88fd-46ce-b4e7-43ed2a083e2c" containerName="dnsmasq-dns" Feb 16 21:13:55 crc kubenswrapper[4811]: E0216 21:13:55.018207 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0927a3bc-88fd-46ce-b4e7-43ed2a083e2c" 
containerName="init" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.018213 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0927a3bc-88fd-46ce-b4e7-43ed2a083e2c" containerName="init" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.018377 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="0927a3bc-88fd-46ce-b4e7-43ed2a083e2c" containerName="dnsmasq-dns" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.019404 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.021907 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.022180 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.022891 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.023411 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-lkdd5" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.025703 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.027106 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.030007 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.030149 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.047707 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.064536 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.133867 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-953416f2-8442-4b16-a122-58a357229e61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-953416f2-8442-4b16-a122-58a357229e61\") pod \"glance-default-external-api-0\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.133922 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvljs\" (UniqueName: \"kubernetes.io/projected/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-kube-api-access-zvljs\") pod \"glance-default-internal-api-0\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.133951 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-logs\") pod \"glance-default-external-api-0\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " 
pod="openstack/glance-default-external-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.133979 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.134007 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-config-data\") pod \"glance-default-internal-api-0\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.134037 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.134064 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d99tk\" (UniqueName: \"kubernetes.io/projected/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-kube-api-access-d99tk\") pod \"glance-default-external-api-0\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.134082 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-scripts\") pod \"glance-default-internal-api-0\" 
(UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.134101 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\") pod \"glance-default-internal-api-0\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.134123 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.134148 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-logs\") pod \"glance-default-internal-api-0\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.134210 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.134237 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-config-data\") pod \"glance-default-external-api-0\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.134267 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.134298 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-scripts\") pod \"glance-default-external-api-0\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.134326 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.197426 4811 scope.go:117] "RemoveContainer" containerID="bc903609e7754f8103e3e4fcb3981c96e7871ecbb3117d9e5ccd56fd1ac91ad6" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.236486 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d99tk\" (UniqueName: \"kubernetes.io/projected/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-kube-api-access-d99tk\") pod \"glance-default-external-api-0\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " 
pod="openstack/glance-default-external-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.236520 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-scripts\") pod \"glance-default-internal-api-0\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.236546 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\") pod \"glance-default-internal-api-0\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.236572 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.236594 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-logs\") pod \"glance-default-internal-api-0\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.237061 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " pod="openstack/glance-default-external-api-0" Feb 16 
21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.237153 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-config-data\") pod \"glance-default-external-api-0\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.237306 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.237345 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-scripts\") pod \"glance-default-external-api-0\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.237473 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.237507 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-953416f2-8442-4b16-a122-58a357229e61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-953416f2-8442-4b16-a122-58a357229e61\") pod \"glance-default-external-api-0\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:55 crc 
kubenswrapper[4811]: I0216 21:13:55.237643 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvljs\" (UniqueName: \"kubernetes.io/projected/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-kube-api-access-zvljs\") pod \"glance-default-internal-api-0\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.237667 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-logs\") pod \"glance-default-external-api-0\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.237752 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.237935 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.237995 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-config-data\") pod \"glance-default-internal-api-0\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.238056 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.238131 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-logs\") pod \"glance-default-external-api-0\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.238499 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-logs\") pod \"glance-default-internal-api-0\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.239083 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.240585 4811 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.240634 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\") pod \"glance-default-internal-api-0\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/869535b9c27ca8a569925eb99ba7bc75347069a54c745c93c6e314aa9f1a2c6c/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.240715 4811 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.240787 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-953416f2-8442-4b16-a122-58a357229e61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-953416f2-8442-4b16-a122-58a357229e61\") pod \"glance-default-external-api-0\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4be36762ec81406c6a6e28b128f06340bf885474247138da4e2187429bf9f1df/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.242662 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-config-data\") pod \"glance-default-external-api-0\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.243462 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.248910 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-scripts\") pod \"glance-default-internal-api-0\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.249182 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-scripts\") pod \"glance-default-external-api-0\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.251918 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.252291 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.252750 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.258497 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-config-data\") pod \"glance-default-internal-api-0\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.261133 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvljs\" (UniqueName: \"kubernetes.io/projected/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-kube-api-access-zvljs\") pod \"glance-default-internal-api-0\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.262044 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d99tk\" (UniqueName: \"kubernetes.io/projected/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-kube-api-access-d99tk\") pod \"glance-default-external-api-0\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.293359 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-953416f2-8442-4b16-a122-58a357229e61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-953416f2-8442-4b16-a122-58a357229e61\") pod \"glance-default-external-api-0\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " pod="openstack/glance-default-external-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.293637 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\") pod \"glance-default-internal-api-0\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.351914 4811 scope.go:117] "RemoveContainer" containerID="8958ac521317f11ccae29dbd21a8345e663055edfbc680f2afb24015f44250e1" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.373992 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.407452 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.413020 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8xm4f" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.543667 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcx4c\" (UniqueName: \"kubernetes.io/projected/3518c12b-e37a-4c8d-bbb5-c84f79d45948-kube-api-access-bcx4c\") pod \"3518c12b-e37a-4c8d-bbb5-c84f79d45948\" (UID: \"3518c12b-e37a-4c8d-bbb5-c84f79d45948\") " Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.543801 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3518c12b-e37a-4c8d-bbb5-c84f79d45948-logs\") pod \"3518c12b-e37a-4c8d-bbb5-c84f79d45948\" (UID: \"3518c12b-e37a-4c8d-bbb5-c84f79d45948\") " Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.543858 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3518c12b-e37a-4c8d-bbb5-c84f79d45948-config-data\") pod \"3518c12b-e37a-4c8d-bbb5-c84f79d45948\" (UID: 
\"3518c12b-e37a-4c8d-bbb5-c84f79d45948\") " Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.544002 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3518c12b-e37a-4c8d-bbb5-c84f79d45948-scripts\") pod \"3518c12b-e37a-4c8d-bbb5-c84f79d45948\" (UID: \"3518c12b-e37a-4c8d-bbb5-c84f79d45948\") " Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.544120 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3518c12b-e37a-4c8d-bbb5-c84f79d45948-combined-ca-bundle\") pod \"3518c12b-e37a-4c8d-bbb5-c84f79d45948\" (UID: \"3518c12b-e37a-4c8d-bbb5-c84f79d45948\") " Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.544881 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3518c12b-e37a-4c8d-bbb5-c84f79d45948-logs" (OuterVolumeSpecName: "logs") pod "3518c12b-e37a-4c8d-bbb5-c84f79d45948" (UID: "3518c12b-e37a-4c8d-bbb5-c84f79d45948"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.548265 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3518c12b-e37a-4c8d-bbb5-c84f79d45948-kube-api-access-bcx4c" (OuterVolumeSpecName: "kube-api-access-bcx4c") pod "3518c12b-e37a-4c8d-bbb5-c84f79d45948" (UID: "3518c12b-e37a-4c8d-bbb5-c84f79d45948"). InnerVolumeSpecName "kube-api-access-bcx4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.552363 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3518c12b-e37a-4c8d-bbb5-c84f79d45948-scripts" (OuterVolumeSpecName: "scripts") pod "3518c12b-e37a-4c8d-bbb5-c84f79d45948" (UID: "3518c12b-e37a-4c8d-bbb5-c84f79d45948"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.580451 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3518c12b-e37a-4c8d-bbb5-c84f79d45948-config-data" (OuterVolumeSpecName: "config-data") pod "3518c12b-e37a-4c8d-bbb5-c84f79d45948" (UID: "3518c12b-e37a-4c8d-bbb5-c84f79d45948"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.598393 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3518c12b-e37a-4c8d-bbb5-c84f79d45948-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3518c12b-e37a-4c8d-bbb5-c84f79d45948" (UID: "3518c12b-e37a-4c8d-bbb5-c84f79d45948"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.648590 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcx4c\" (UniqueName: \"kubernetes.io/projected/3518c12b-e37a-4c8d-bbb5-c84f79d45948-kube-api-access-bcx4c\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.648628 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3518c12b-e37a-4c8d-bbb5-c84f79d45948-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.648638 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3518c12b-e37a-4c8d-bbb5-c84f79d45948-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.648648 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3518c12b-e37a-4c8d-bbb5-c84f79d45948-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:55 crc 
kubenswrapper[4811]: I0216 21:13:55.648658 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3518c12b-e37a-4c8d-bbb5-c84f79d45948-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.755656 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5f984f4f8d-xr8xc"] Feb 16 21:13:55 crc kubenswrapper[4811]: E0216 21:13:55.756231 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3518c12b-e37a-4c8d-bbb5-c84f79d45948" containerName="placement-db-sync" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.756245 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3518c12b-e37a-4c8d-bbb5-c84f79d45948" containerName="placement-db-sync" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.756506 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3518c12b-e37a-4c8d-bbb5-c84f79d45948" containerName="placement-db-sync" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.759476 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.767024 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.767223 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.770239 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b9874789f-2tq4q" event={"ID":"ef08d7ef-0bd9-4126-bd7b-0d46b646be40","Type":"ContainerStarted","Data":"f9d2524235b8f9166abd10a7283ee29249335c9a330d0ac0fe541fce4826fada"} Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.770275 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b9874789f-2tq4q" event={"ID":"ef08d7ef-0bd9-4126-bd7b-0d46b646be40","Type":"ContainerStarted","Data":"1f2babe0f72266a5a14e5ba1cae8cc3a4d0b5a1a64c9fa744767e4f2e201a33d"} Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.773643 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f984f4f8d-xr8xc"] Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.800555 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8xm4f" event={"ID":"3518c12b-e37a-4c8d-bbb5-c84f79d45948","Type":"ContainerDied","Data":"653053a168c0501aeeacdd45108fc99f1095d7c6b2c08c0d28f3eafb53592f4b"} Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.800597 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="653053a168c0501aeeacdd45108fc99f1095d7c6b2c08c0d28f3eafb53592f4b" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.800672 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8xm4f" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.852070 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-config-data\") pod \"placement-5f984f4f8d-xr8xc\" (UID: \"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.852169 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-internal-tls-certs\") pod \"placement-5f984f4f8d-xr8xc\" (UID: \"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.853257 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csmhw\" (UniqueName: \"kubernetes.io/projected/0b20ea8e-53de-433a-8739-88f1da6a3af5-kube-api-access-csmhw\") pod \"placement-5f984f4f8d-xr8xc\" (UID: \"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.853365 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b20ea8e-53de-433a-8739-88f1da6a3af5-logs\") pod \"placement-5f984f4f8d-xr8xc\" (UID: \"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.853471 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-scripts\") pod \"placement-5f984f4f8d-xr8xc\" (UID: 
\"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.853552 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-public-tls-certs\") pod \"placement-5f984f4f8d-xr8xc\" (UID: \"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.853604 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-combined-ca-bundle\") pod \"placement-5f984f4f8d-xr8xc\" (UID: \"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.955181 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-internal-tls-certs\") pod \"placement-5f984f4f8d-xr8xc\" (UID: \"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.955490 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csmhw\" (UniqueName: \"kubernetes.io/projected/0b20ea8e-53de-433a-8739-88f1da6a3af5-kube-api-access-csmhw\") pod \"placement-5f984f4f8d-xr8xc\" (UID: \"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.955540 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b20ea8e-53de-433a-8739-88f1da6a3af5-logs\") pod \"placement-5f984f4f8d-xr8xc\" (UID: 
\"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.955582 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-scripts\") pod \"placement-5f984f4f8d-xr8xc\" (UID: \"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.955620 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-public-tls-certs\") pod \"placement-5f984f4f8d-xr8xc\" (UID: \"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.955646 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-combined-ca-bundle\") pod \"placement-5f984f4f8d-xr8xc\" (UID: \"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.955688 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-config-data\") pod \"placement-5f984f4f8d-xr8xc\" (UID: \"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.960506 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b20ea8e-53de-433a-8739-88f1da6a3af5-logs\") pod \"placement-5f984f4f8d-xr8xc\" (UID: \"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:13:55 crc 
kubenswrapper[4811]: I0216 21:13:55.963518 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-public-tls-certs\") pod \"placement-5f984f4f8d-xr8xc\" (UID: \"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.963837 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-scripts\") pod \"placement-5f984f4f8d-xr8xc\" (UID: \"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.964184 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-combined-ca-bundle\") pod \"placement-5f984f4f8d-xr8xc\" (UID: \"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.972090 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-config-data\") pod \"placement-5f984f4f8d-xr8xc\" (UID: \"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.973957 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-internal-tls-certs\") pod \"placement-5f984f4f8d-xr8xc\" (UID: \"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:13:55 crc kubenswrapper[4811]: I0216 21:13:55.978798 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-csmhw\" (UniqueName: \"kubernetes.io/projected/0b20ea8e-53de-433a-8739-88f1da6a3af5-kube-api-access-csmhw\") pod \"placement-5f984f4f8d-xr8xc\" (UID: \"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:13:56 crc kubenswrapper[4811]: I0216 21:13:56.092637 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:13:56 crc kubenswrapper[4811]: I0216 21:13:56.123975 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:13:56 crc kubenswrapper[4811]: I0216 21:13:56.629064 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f984f4f8d-xr8xc"] Feb 16 21:13:56 crc kubenswrapper[4811]: W0216 21:13:56.651533 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b20ea8e_53de_433a_8739_88f1da6a3af5.slice/crio-e919d29ade777d8d421c57a91a323e57a802794dd196a9c8fb124b354b4fa856 WatchSource:0}: Error finding container e919d29ade777d8d421c57a91a323e57a802794dd196a9c8fb124b354b4fa856: Status 404 returned error can't find the container with id e919d29ade777d8d421c57a91a323e57a802794dd196a9c8fb124b354b4fa856 Feb 16 21:13:56 crc kubenswrapper[4811]: I0216 21:13:56.720181 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0927a3bc-88fd-46ce-b4e7-43ed2a083e2c" path="/var/lib/kubelet/pods/0927a3bc-88fd-46ce-b4e7-43ed2a083e2c/volumes" Feb 16 21:13:56 crc kubenswrapper[4811]: I0216 21:13:56.724146 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1edfbd65-bed1-4d1b-86ae-d7fefa3e4432" path="/var/lib/kubelet/pods/1edfbd65-bed1-4d1b-86ae-d7fefa3e4432/volumes" Feb 16 21:13:56 crc kubenswrapper[4811]: I0216 21:13:56.725070 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dae37545-788b-495d-b91b-01e7fa6cd250" 
path="/var/lib/kubelet/pods/dae37545-788b-495d-b91b-01e7fa6cd250/volumes" Feb 16 21:13:56 crc kubenswrapper[4811]: I0216 21:13:56.829805 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f984f4f8d-xr8xc" event={"ID":"0b20ea8e-53de-433a-8739-88f1da6a3af5","Type":"ContainerStarted","Data":"e919d29ade777d8d421c57a91a323e57a802794dd196a9c8fb124b354b4fa856"} Feb 16 21:13:56 crc kubenswrapper[4811]: I0216 21:13:56.834148 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5","Type":"ContainerStarted","Data":"75f7a5c981de699fc3ff79e90f3c543272c09e24d25b079c78dc5c6984d0e922"} Feb 16 21:13:56 crc kubenswrapper[4811]: I0216 21:13:56.967519 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:13:56 crc kubenswrapper[4811]: W0216 21:13:56.972105 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13a1f6a9_4084_46c9_be98_b2a8f2a98a21.slice/crio-1071ab1cd1bb6da7f3bc4f37ff9f40218474a25bdbbe07bb6bdc45724a83ab24 WatchSource:0}: Error finding container 1071ab1cd1bb6da7f3bc4f37ff9f40218474a25bdbbe07bb6bdc45724a83ab24: Status 404 returned error can't find the container with id 1071ab1cd1bb6da7f3bc4f37ff9f40218474a25bdbbe07bb6bdc45724a83ab24 Feb 16 21:13:57 crc kubenswrapper[4811]: E0216 21:13:57.713583 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:13:57 crc kubenswrapper[4811]: I0216 21:13:57.769726 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-l6k27" Feb 16 21:13:57 crc kubenswrapper[4811]: I0216 21:13:57.802660 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxjfq\" (UniqueName: \"kubernetes.io/projected/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-kube-api-access-cxjfq\") pod \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\" (UID: \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\") " Feb 16 21:13:57 crc kubenswrapper[4811]: I0216 21:13:57.802708 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-credential-keys\") pod \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\" (UID: \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\") " Feb 16 21:13:57 crc kubenswrapper[4811]: I0216 21:13:57.802775 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-scripts\") pod \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\" (UID: \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\") " Feb 16 21:13:57 crc kubenswrapper[4811]: I0216 21:13:57.802909 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-fernet-keys\") pod \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\" (UID: \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\") " Feb 16 21:13:57 crc kubenswrapper[4811]: I0216 21:13:57.803063 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-combined-ca-bundle\") pod \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\" (UID: \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\") " Feb 16 21:13:57 crc kubenswrapper[4811]: I0216 21:13:57.803100 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-config-data\") pod \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\" (UID: \"0da681f8-0bc1-49c1-b1ae-82ec13f671e1\") " Feb 16 21:13:57 crc kubenswrapper[4811]: I0216 21:13:57.810382 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-scripts" (OuterVolumeSpecName: "scripts") pod "0da681f8-0bc1-49c1-b1ae-82ec13f671e1" (UID: "0da681f8-0bc1-49c1-b1ae-82ec13f671e1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:57 crc kubenswrapper[4811]: I0216 21:13:57.810405 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-kube-api-access-cxjfq" (OuterVolumeSpecName: "kube-api-access-cxjfq") pod "0da681f8-0bc1-49c1-b1ae-82ec13f671e1" (UID: "0da681f8-0bc1-49c1-b1ae-82ec13f671e1"). InnerVolumeSpecName "kube-api-access-cxjfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:13:57 crc kubenswrapper[4811]: I0216 21:13:57.818480 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0da681f8-0bc1-49c1-b1ae-82ec13f671e1" (UID: "0da681f8-0bc1-49c1-b1ae-82ec13f671e1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:57 crc kubenswrapper[4811]: I0216 21:13:57.832312 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0da681f8-0bc1-49c1-b1ae-82ec13f671e1" (UID: "0da681f8-0bc1-49c1-b1ae-82ec13f671e1"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:57 crc kubenswrapper[4811]: I0216 21:13:57.849220 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b9874789f-2tq4q" event={"ID":"ef08d7ef-0bd9-4126-bd7b-0d46b646be40","Type":"ContainerStarted","Data":"f5f7c537b885c9273be01ba6a3b6ccdc3b0e04f82ed60207dba7e4dea389a114"} Feb 16 21:13:57 crc kubenswrapper[4811]: I0216 21:13:57.851747 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l6k27" Feb 16 21:13:57 crc kubenswrapper[4811]: I0216 21:13:57.851868 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l6k27" event={"ID":"0da681f8-0bc1-49c1-b1ae-82ec13f671e1","Type":"ContainerDied","Data":"10c15382679c93e2bbd387e5007beec44608748aa8bf762de7afba6c5dc464a4"} Feb 16 21:13:57 crc kubenswrapper[4811]: I0216 21:13:57.851924 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10c15382679c93e2bbd387e5007beec44608748aa8bf762de7afba6c5dc464a4" Feb 16 21:13:57 crc kubenswrapper[4811]: I0216 21:13:57.858030 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"13a1f6a9-4084-46c9-be98-b2a8f2a98a21","Type":"ContainerStarted","Data":"1071ab1cd1bb6da7f3bc4f37ff9f40218474a25bdbbe07bb6bdc45724a83ab24"} Feb 16 21:13:57 crc kubenswrapper[4811]: I0216 21:13:57.862444 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0da681f8-0bc1-49c1-b1ae-82ec13f671e1" (UID: "0da681f8-0bc1-49c1-b1ae-82ec13f671e1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:57 crc kubenswrapper[4811]: I0216 21:13:57.871230 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-config-data" (OuterVolumeSpecName: "config-data") pod "0da681f8-0bc1-49c1-b1ae-82ec13f671e1" (UID: "0da681f8-0bc1-49c1-b1ae-82ec13f671e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:13:57 crc kubenswrapper[4811]: I0216 21:13:57.905106 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:57 crc kubenswrapper[4811]: I0216 21:13:57.905125 4811 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:57 crc kubenswrapper[4811]: I0216 21:13:57.905135 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:57 crc kubenswrapper[4811]: I0216 21:13:57.905145 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:57 crc kubenswrapper[4811]: I0216 21:13:57.905175 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxjfq\" (UniqueName: \"kubernetes.io/projected/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-kube-api-access-cxjfq\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:57 crc kubenswrapper[4811]: I0216 21:13:57.905184 4811 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/0da681f8-0bc1-49c1-b1ae-82ec13f671e1-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 16 21:13:58 crc kubenswrapper[4811]: I0216 21:13:58.877489 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f984f4f8d-xr8xc" event={"ID":"0b20ea8e-53de-433a-8739-88f1da6a3af5","Type":"ContainerStarted","Data":"7f57acf64a906fee962c584307b5fbbabe4c24fd6f58ddcdad9851f0aad15a41"} Feb 16 21:13:58 crc kubenswrapper[4811]: I0216 21:13:58.879592 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5","Type":"ContainerStarted","Data":"0435fc64d5d23e794f36b75641c72c486ab505052d2c63966a9f397382a079b1"} Feb 16 21:13:58 crc kubenswrapper[4811]: I0216 21:13:58.885380 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"13a1f6a9-4084-46c9-be98-b2a8f2a98a21","Type":"ContainerStarted","Data":"151f8e84f148fca50c002bb8cb351ef4dbdbecd8430f19603ed7c116c64706c2"} Feb 16 21:13:58 crc kubenswrapper[4811]: I0216 21:13:58.885707 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:13:58 crc kubenswrapper[4811]: I0216 21:13:58.914518 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5b9874789f-2tq4q" podStartSLOduration=5.914500529 podStartE2EDuration="5.914500529s" podCreationTimestamp="2026-02-16 21:13:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:13:58.909996724 +0000 UTC m=+1056.839292662" watchObservedRunningTime="2026-02-16 21:13:58.914500529 +0000 UTC m=+1056.843796467" Feb 16 21:13:58 crc kubenswrapper[4811]: I0216 21:13:58.941177 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7bf9c6cdb6-77vqw"] Feb 16 21:13:58 crc kubenswrapper[4811]: 
E0216 21:13:58.941594 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da681f8-0bc1-49c1-b1ae-82ec13f671e1" containerName="keystone-bootstrap" Feb 16 21:13:58 crc kubenswrapper[4811]: I0216 21:13:58.941611 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0da681f8-0bc1-49c1-b1ae-82ec13f671e1" containerName="keystone-bootstrap" Feb 16 21:13:58 crc kubenswrapper[4811]: I0216 21:13:58.941818 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="0da681f8-0bc1-49c1-b1ae-82ec13f671e1" containerName="keystone-bootstrap" Feb 16 21:13:58 crc kubenswrapper[4811]: I0216 21:13:58.942622 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:13:58 crc kubenswrapper[4811]: I0216 21:13:58.946439 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 21:13:58 crc kubenswrapper[4811]: I0216 21:13:58.946553 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-s2qbh" Feb 16 21:13:58 crc kubenswrapper[4811]: I0216 21:13:58.946652 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 16 21:13:58 crc kubenswrapper[4811]: I0216 21:13:58.947004 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 21:13:58 crc kubenswrapper[4811]: I0216 21:13:58.947756 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 21:13:58 crc kubenswrapper[4811]: I0216 21:13:58.950141 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 16 21:13:58 crc kubenswrapper[4811]: I0216 21:13:58.972026 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7bf9c6cdb6-77vqw"] Feb 16 21:13:59 crc kubenswrapper[4811]: I0216 21:13:59.023567 4811 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4009257-0fad-4d48-b144-6faf80ea5e0c-combined-ca-bundle\") pod \"keystone-7bf9c6cdb6-77vqw\" (UID: \"f4009257-0fad-4d48-b144-6faf80ea5e0c\") " pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:13:59 crc kubenswrapper[4811]: I0216 21:13:59.023671 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f4009257-0fad-4d48-b144-6faf80ea5e0c-fernet-keys\") pod \"keystone-7bf9c6cdb6-77vqw\" (UID: \"f4009257-0fad-4d48-b144-6faf80ea5e0c\") " pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:13:59 crc kubenswrapper[4811]: I0216 21:13:59.023721 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4009257-0fad-4d48-b144-6faf80ea5e0c-internal-tls-certs\") pod \"keystone-7bf9c6cdb6-77vqw\" (UID: \"f4009257-0fad-4d48-b144-6faf80ea5e0c\") " pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:13:59 crc kubenswrapper[4811]: I0216 21:13:59.023760 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f4009257-0fad-4d48-b144-6faf80ea5e0c-credential-keys\") pod \"keystone-7bf9c6cdb6-77vqw\" (UID: \"f4009257-0fad-4d48-b144-6faf80ea5e0c\") " pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:13:59 crc kubenswrapper[4811]: I0216 21:13:59.023778 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4009257-0fad-4d48-b144-6faf80ea5e0c-scripts\") pod \"keystone-7bf9c6cdb6-77vqw\" (UID: \"f4009257-0fad-4d48-b144-6faf80ea5e0c\") " pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:13:59 crc kubenswrapper[4811]: I0216 21:13:59.023794 4811 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4009257-0fad-4d48-b144-6faf80ea5e0c-public-tls-certs\") pod \"keystone-7bf9c6cdb6-77vqw\" (UID: \"f4009257-0fad-4d48-b144-6faf80ea5e0c\") " pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:13:59 crc kubenswrapper[4811]: I0216 21:13:59.023835 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4009257-0fad-4d48-b144-6faf80ea5e0c-config-data\") pod \"keystone-7bf9c6cdb6-77vqw\" (UID: \"f4009257-0fad-4d48-b144-6faf80ea5e0c\") " pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:13:59 crc kubenswrapper[4811]: I0216 21:13:59.024053 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdnzl\" (UniqueName: \"kubernetes.io/projected/f4009257-0fad-4d48-b144-6faf80ea5e0c-kube-api-access-tdnzl\") pod \"keystone-7bf9c6cdb6-77vqw\" (UID: \"f4009257-0fad-4d48-b144-6faf80ea5e0c\") " pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:13:59 crc kubenswrapper[4811]: I0216 21:13:59.125570 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f4009257-0fad-4d48-b144-6faf80ea5e0c-credential-keys\") pod \"keystone-7bf9c6cdb6-77vqw\" (UID: \"f4009257-0fad-4d48-b144-6faf80ea5e0c\") " pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:13:59 crc kubenswrapper[4811]: I0216 21:13:59.125627 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4009257-0fad-4d48-b144-6faf80ea5e0c-scripts\") pod \"keystone-7bf9c6cdb6-77vqw\" (UID: \"f4009257-0fad-4d48-b144-6faf80ea5e0c\") " pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:13:59 crc kubenswrapper[4811]: I0216 21:13:59.125656 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4009257-0fad-4d48-b144-6faf80ea5e0c-public-tls-certs\") pod \"keystone-7bf9c6cdb6-77vqw\" (UID: \"f4009257-0fad-4d48-b144-6faf80ea5e0c\") " pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:13:59 crc kubenswrapper[4811]: I0216 21:13:59.125718 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4009257-0fad-4d48-b144-6faf80ea5e0c-config-data\") pod \"keystone-7bf9c6cdb6-77vqw\" (UID: \"f4009257-0fad-4d48-b144-6faf80ea5e0c\") " pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:13:59 crc kubenswrapper[4811]: I0216 21:13:59.125769 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdnzl\" (UniqueName: \"kubernetes.io/projected/f4009257-0fad-4d48-b144-6faf80ea5e0c-kube-api-access-tdnzl\") pod \"keystone-7bf9c6cdb6-77vqw\" (UID: \"f4009257-0fad-4d48-b144-6faf80ea5e0c\") " pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:13:59 crc kubenswrapper[4811]: I0216 21:13:59.125819 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4009257-0fad-4d48-b144-6faf80ea5e0c-combined-ca-bundle\") pod \"keystone-7bf9c6cdb6-77vqw\" (UID: \"f4009257-0fad-4d48-b144-6faf80ea5e0c\") " pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:13:59 crc kubenswrapper[4811]: I0216 21:13:59.125867 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f4009257-0fad-4d48-b144-6faf80ea5e0c-fernet-keys\") pod \"keystone-7bf9c6cdb6-77vqw\" (UID: \"f4009257-0fad-4d48-b144-6faf80ea5e0c\") " pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:13:59 crc kubenswrapper[4811]: I0216 21:13:59.125923 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/f4009257-0fad-4d48-b144-6faf80ea5e0c-internal-tls-certs\") pod \"keystone-7bf9c6cdb6-77vqw\" (UID: \"f4009257-0fad-4d48-b144-6faf80ea5e0c\") " pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:13:59 crc kubenswrapper[4811]: I0216 21:13:59.132887 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4009257-0fad-4d48-b144-6faf80ea5e0c-public-tls-certs\") pod \"keystone-7bf9c6cdb6-77vqw\" (UID: \"f4009257-0fad-4d48-b144-6faf80ea5e0c\") " pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:13:59 crc kubenswrapper[4811]: I0216 21:13:59.133514 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4009257-0fad-4d48-b144-6faf80ea5e0c-config-data\") pod \"keystone-7bf9c6cdb6-77vqw\" (UID: \"f4009257-0fad-4d48-b144-6faf80ea5e0c\") " pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:13:59 crc kubenswrapper[4811]: I0216 21:13:59.134049 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f4009257-0fad-4d48-b144-6faf80ea5e0c-credential-keys\") pod \"keystone-7bf9c6cdb6-77vqw\" (UID: \"f4009257-0fad-4d48-b144-6faf80ea5e0c\") " pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:13:59 crc kubenswrapper[4811]: I0216 21:13:59.134251 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4009257-0fad-4d48-b144-6faf80ea5e0c-internal-tls-certs\") pod \"keystone-7bf9c6cdb6-77vqw\" (UID: \"f4009257-0fad-4d48-b144-6faf80ea5e0c\") " pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:13:59 crc kubenswrapper[4811]: I0216 21:13:59.136391 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4009257-0fad-4d48-b144-6faf80ea5e0c-combined-ca-bundle\") pod 
\"keystone-7bf9c6cdb6-77vqw\" (UID: \"f4009257-0fad-4d48-b144-6faf80ea5e0c\") " pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:13:59 crc kubenswrapper[4811]: I0216 21:13:59.137583 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f4009257-0fad-4d48-b144-6faf80ea5e0c-fernet-keys\") pod \"keystone-7bf9c6cdb6-77vqw\" (UID: \"f4009257-0fad-4d48-b144-6faf80ea5e0c\") " pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:13:59 crc kubenswrapper[4811]: I0216 21:13:59.143583 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4009257-0fad-4d48-b144-6faf80ea5e0c-scripts\") pod \"keystone-7bf9c6cdb6-77vqw\" (UID: \"f4009257-0fad-4d48-b144-6faf80ea5e0c\") " pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:13:59 crc kubenswrapper[4811]: I0216 21:13:59.161718 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdnzl\" (UniqueName: \"kubernetes.io/projected/f4009257-0fad-4d48-b144-6faf80ea5e0c-kube-api-access-tdnzl\") pod \"keystone-7bf9c6cdb6-77vqw\" (UID: \"f4009257-0fad-4d48-b144-6faf80ea5e0c\") " pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:13:59 crc kubenswrapper[4811]: I0216 21:13:59.259748 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:14:01 crc kubenswrapper[4811]: I0216 21:14:01.397694 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7bf9c6cdb6-77vqw"] Feb 16 21:14:01 crc kubenswrapper[4811]: W0216 21:14:01.409314 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4009257_0fad_4d48_b144_6faf80ea5e0c.slice/crio-77e6da677d0191d0cc055eb8d1b4a3ca2ba92d2218ee0a5ca5dad70238f00ea4 WatchSource:0}: Error finding container 77e6da677d0191d0cc055eb8d1b4a3ca2ba92d2218ee0a5ca5dad70238f00ea4: Status 404 returned error can't find the container with id 77e6da677d0191d0cc055eb8d1b4a3ca2ba92d2218ee0a5ca5dad70238f00ea4 Feb 16 21:14:01 crc kubenswrapper[4811]: I0216 21:14:01.948162 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f984f4f8d-xr8xc" event={"ID":"0b20ea8e-53de-433a-8739-88f1da6a3af5","Type":"ContainerStarted","Data":"529f36854fdaf3d5a4dc41bb0b3da42100c9da10fd73920867e2036772820eb1"} Feb 16 21:14:01 crc kubenswrapper[4811]: I0216 21:14:01.948653 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:14:01 crc kubenswrapper[4811]: I0216 21:14:01.948672 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:14:01 crc kubenswrapper[4811]: I0216 21:14:01.955367 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5","Type":"ContainerStarted","Data":"0b1b9e66c581157ec0655effc752edb74d35f7ff3f8bff1467108dfe4ef8b1e5"} Feb 16 21:14:01 crc kubenswrapper[4811]: I0216 21:14:01.957417 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-njvjn" 
event={"ID":"89a1f359-cb47-470b-ad6e-48d11efacfce","Type":"ContainerStarted","Data":"ef53e4e24d7f3df32bca85abb72d8ba1aaa1129c3399026269d17c671af18e3f"} Feb 16 21:14:01 crc kubenswrapper[4811]: I0216 21:14:01.960336 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"13a1f6a9-4084-46c9-be98-b2a8f2a98a21","Type":"ContainerStarted","Data":"2791258945c3659b76f922d401532a57d765156bd11706358c7d4e54e3e96c97"} Feb 16 21:14:01 crc kubenswrapper[4811]: I0216 21:14:01.962387 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7bf9c6cdb6-77vqw" event={"ID":"f4009257-0fad-4d48-b144-6faf80ea5e0c","Type":"ContainerStarted","Data":"9b4701a99cc8acfd723beb799aed125fada534999653656b64f607230983bb8d"} Feb 16 21:14:01 crc kubenswrapper[4811]: I0216 21:14:01.962431 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7bf9c6cdb6-77vqw" event={"ID":"f4009257-0fad-4d48-b144-6faf80ea5e0c","Type":"ContainerStarted","Data":"77e6da677d0191d0cc055eb8d1b4a3ca2ba92d2218ee0a5ca5dad70238f00ea4"} Feb 16 21:14:01 crc kubenswrapper[4811]: I0216 21:14:01.962905 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:14:01 crc kubenswrapper[4811]: I0216 21:14:01.965950 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18bbdf69-d815-49b8-a29d-8b90a8e2987f","Type":"ContainerStarted","Data":"dc47b4a1abc025f8c6300f7fa333be0cc3c69ef33433135bc8eadee043fbbe89"} Feb 16 21:14:01 crc kubenswrapper[4811]: I0216 21:14:01.994180 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5f984f4f8d-xr8xc" podStartSLOduration=6.994162735 podStartE2EDuration="6.994162735s" podCreationTimestamp="2026-02-16 21:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 
21:14:01.984353284 +0000 UTC m=+1059.913649222" watchObservedRunningTime="2026-02-16 21:14:01.994162735 +0000 UTC m=+1059.923458673" Feb 16 21:14:02 crc kubenswrapper[4811]: I0216 21:14:02.010611 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-njvjn" podStartSLOduration=2.3501625219999998 podStartE2EDuration="36.010593434s" podCreationTimestamp="2026-02-16 21:13:26 +0000 UTC" firstStartedPulling="2026-02-16 21:13:27.572183567 +0000 UTC m=+1025.501479505" lastFinishedPulling="2026-02-16 21:14:01.232614479 +0000 UTC m=+1059.161910417" observedRunningTime="2026-02-16 21:14:02.000045155 +0000 UTC m=+1059.929341093" watchObservedRunningTime="2026-02-16 21:14:02.010593434 +0000 UTC m=+1059.939889372" Feb 16 21:14:02 crc kubenswrapper[4811]: I0216 21:14:02.023961 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7bf9c6cdb6-77vqw" podStartSLOduration=4.023939585 podStartE2EDuration="4.023939585s" podCreationTimestamp="2026-02-16 21:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:14:02.016768582 +0000 UTC m=+1059.946064530" watchObservedRunningTime="2026-02-16 21:14:02.023939585 +0000 UTC m=+1059.953235523" Feb 16 21:14:02 crc kubenswrapper[4811]: I0216 21:14:02.043354 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-4d766" Feb 16 21:14:02 crc kubenswrapper[4811]: I0216 21:14:02.045040 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.045030074 podStartE2EDuration="8.045030074s" podCreationTimestamp="2026-02-16 21:13:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:14:02.037622335 +0000 UTC m=+1059.966918273" 
watchObservedRunningTime="2026-02-16 21:14:02.045030074 +0000 UTC m=+1059.974326012" Feb 16 21:14:02 crc kubenswrapper[4811]: I0216 21:14:02.061690 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.061668799 podStartE2EDuration="8.061668799s" podCreationTimestamp="2026-02-16 21:13:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:14:02.054372833 +0000 UTC m=+1059.983668771" watchObservedRunningTime="2026-02-16 21:14:02.061668799 +0000 UTC m=+1059.990964727" Feb 16 21:14:02 crc kubenswrapper[4811]: I0216 21:14:02.119772 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-cdcch"] Feb 16 21:14:02 crc kubenswrapper[4811]: I0216 21:14:02.119963 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" podUID="4bf21953-4d87-4a23-a09a-454e12365b71" containerName="dnsmasq-dns" containerID="cri-o://d6aa80d51a3632fb715d82dafa67b03dd33fad62e372441c06b7cbafa75a6360" gracePeriod=10 Feb 16 21:14:02 crc kubenswrapper[4811]: I0216 21:14:02.729438 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" Feb 16 21:14:02 crc kubenswrapper[4811]: I0216 21:14:02.910095 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bf21953-4d87-4a23-a09a-454e12365b71-dns-svc\") pod \"4bf21953-4d87-4a23-a09a-454e12365b71\" (UID: \"4bf21953-4d87-4a23-a09a-454e12365b71\") " Feb 16 21:14:02 crc kubenswrapper[4811]: I0216 21:14:02.910163 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9sw2\" (UniqueName: \"kubernetes.io/projected/4bf21953-4d87-4a23-a09a-454e12365b71-kube-api-access-g9sw2\") pod \"4bf21953-4d87-4a23-a09a-454e12365b71\" (UID: \"4bf21953-4d87-4a23-a09a-454e12365b71\") " Feb 16 21:14:02 crc kubenswrapper[4811]: I0216 21:14:02.910631 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bf21953-4d87-4a23-a09a-454e12365b71-config\") pod \"4bf21953-4d87-4a23-a09a-454e12365b71\" (UID: \"4bf21953-4d87-4a23-a09a-454e12365b71\") " Feb 16 21:14:02 crc kubenswrapper[4811]: I0216 21:14:02.910730 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bf21953-4d87-4a23-a09a-454e12365b71-ovsdbserver-sb\") pod \"4bf21953-4d87-4a23-a09a-454e12365b71\" (UID: \"4bf21953-4d87-4a23-a09a-454e12365b71\") " Feb 16 21:14:02 crc kubenswrapper[4811]: I0216 21:14:02.910759 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bf21953-4d87-4a23-a09a-454e12365b71-ovsdbserver-nb\") pod \"4bf21953-4d87-4a23-a09a-454e12365b71\" (UID: \"4bf21953-4d87-4a23-a09a-454e12365b71\") " Feb 16 21:14:02 crc kubenswrapper[4811]: I0216 21:14:02.917388 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/4bf21953-4d87-4a23-a09a-454e12365b71-kube-api-access-g9sw2" (OuterVolumeSpecName: "kube-api-access-g9sw2") pod "4bf21953-4d87-4a23-a09a-454e12365b71" (UID: "4bf21953-4d87-4a23-a09a-454e12365b71"). InnerVolumeSpecName "kube-api-access-g9sw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:14:02 crc kubenswrapper[4811]: I0216 21:14:02.993085 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf21953-4d87-4a23-a09a-454e12365b71-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4bf21953-4d87-4a23-a09a-454e12365b71" (UID: "4bf21953-4d87-4a23-a09a-454e12365b71"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:03 crc kubenswrapper[4811]: I0216 21:14:03.001298 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf21953-4d87-4a23-a09a-454e12365b71-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4bf21953-4d87-4a23-a09a-454e12365b71" (UID: "4bf21953-4d87-4a23-a09a-454e12365b71"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:03 crc kubenswrapper[4811]: I0216 21:14:03.011101 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf21953-4d87-4a23-a09a-454e12365b71-config" (OuterVolumeSpecName: "config") pod "4bf21953-4d87-4a23-a09a-454e12365b71" (UID: "4bf21953-4d87-4a23-a09a-454e12365b71"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:03 crc kubenswrapper[4811]: I0216 21:14:03.012225 4811 generic.go:334] "Generic (PLEG): container finished" podID="4bf21953-4d87-4a23-a09a-454e12365b71" containerID="d6aa80d51a3632fb715d82dafa67b03dd33fad62e372441c06b7cbafa75a6360" exitCode=0 Feb 16 21:14:03 crc kubenswrapper[4811]: I0216 21:14:03.012369 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" Feb 16 21:14:03 crc kubenswrapper[4811]: I0216 21:14:03.012524 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" event={"ID":"4bf21953-4d87-4a23-a09a-454e12365b71","Type":"ContainerDied","Data":"d6aa80d51a3632fb715d82dafa67b03dd33fad62e372441c06b7cbafa75a6360"} Feb 16 21:14:03 crc kubenswrapper[4811]: I0216 21:14:03.012567 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" event={"ID":"4bf21953-4d87-4a23-a09a-454e12365b71","Type":"ContainerDied","Data":"83d82787994c6cf6bf7c982adf0d2518114372464a968544ce10f3dda5769ada"} Feb 16 21:14:03 crc kubenswrapper[4811]: I0216 21:14:03.012586 4811 scope.go:117] "RemoveContainer" containerID="d6aa80d51a3632fb715d82dafa67b03dd33fad62e372441c06b7cbafa75a6360" Feb 16 21:14:03 crc kubenswrapper[4811]: I0216 21:14:03.013843 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bf21953-4d87-4a23-a09a-454e12365b71-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:03 crc kubenswrapper[4811]: I0216 21:14:03.013961 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bf21953-4d87-4a23-a09a-454e12365b71-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:03 crc kubenswrapper[4811]: I0216 21:14:03.014033 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bf21953-4d87-4a23-a09a-454e12365b71-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:03 crc kubenswrapper[4811]: I0216 21:14:03.014294 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9sw2\" (UniqueName: \"kubernetes.io/projected/4bf21953-4d87-4a23-a09a-454e12365b71-kube-api-access-g9sw2\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:03 crc kubenswrapper[4811]: I0216 21:14:03.020475 4811 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf21953-4d87-4a23-a09a-454e12365b71-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4bf21953-4d87-4a23-a09a-454e12365b71" (UID: "4bf21953-4d87-4a23-a09a-454e12365b71"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:03 crc kubenswrapper[4811]: I0216 21:14:03.117701 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bf21953-4d87-4a23-a09a-454e12365b71-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:03 crc kubenswrapper[4811]: I0216 21:14:03.119739 4811 scope.go:117] "RemoveContainer" containerID="95ba0b6409f9c8172829d1bc05fb07c77db0e8506218f13fccc9511b57027a1f" Feb 16 21:14:03 crc kubenswrapper[4811]: I0216 21:14:03.154075 4811 scope.go:117] "RemoveContainer" containerID="d6aa80d51a3632fb715d82dafa67b03dd33fad62e372441c06b7cbafa75a6360" Feb 16 21:14:03 crc kubenswrapper[4811]: E0216 21:14:03.154592 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6aa80d51a3632fb715d82dafa67b03dd33fad62e372441c06b7cbafa75a6360\": container with ID starting with d6aa80d51a3632fb715d82dafa67b03dd33fad62e372441c06b7cbafa75a6360 not found: ID does not exist" containerID="d6aa80d51a3632fb715d82dafa67b03dd33fad62e372441c06b7cbafa75a6360" Feb 16 21:14:03 crc kubenswrapper[4811]: I0216 21:14:03.154623 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6aa80d51a3632fb715d82dafa67b03dd33fad62e372441c06b7cbafa75a6360"} err="failed to get container status \"d6aa80d51a3632fb715d82dafa67b03dd33fad62e372441c06b7cbafa75a6360\": rpc error: code = NotFound desc = could not find container \"d6aa80d51a3632fb715d82dafa67b03dd33fad62e372441c06b7cbafa75a6360\": container with ID starting with 
d6aa80d51a3632fb715d82dafa67b03dd33fad62e372441c06b7cbafa75a6360 not found: ID does not exist" Feb 16 21:14:03 crc kubenswrapper[4811]: I0216 21:14:03.154643 4811 scope.go:117] "RemoveContainer" containerID="95ba0b6409f9c8172829d1bc05fb07c77db0e8506218f13fccc9511b57027a1f" Feb 16 21:14:03 crc kubenswrapper[4811]: E0216 21:14:03.155050 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95ba0b6409f9c8172829d1bc05fb07c77db0e8506218f13fccc9511b57027a1f\": container with ID starting with 95ba0b6409f9c8172829d1bc05fb07c77db0e8506218f13fccc9511b57027a1f not found: ID does not exist" containerID="95ba0b6409f9c8172829d1bc05fb07c77db0e8506218f13fccc9511b57027a1f" Feb 16 21:14:03 crc kubenswrapper[4811]: I0216 21:14:03.155094 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95ba0b6409f9c8172829d1bc05fb07c77db0e8506218f13fccc9511b57027a1f"} err="failed to get container status \"95ba0b6409f9c8172829d1bc05fb07c77db0e8506218f13fccc9511b57027a1f\": rpc error: code = NotFound desc = could not find container \"95ba0b6409f9c8172829d1bc05fb07c77db0e8506218f13fccc9511b57027a1f\": container with ID starting with 95ba0b6409f9c8172829d1bc05fb07c77db0e8506218f13fccc9511b57027a1f not found: ID does not exist" Feb 16 21:14:03 crc kubenswrapper[4811]: I0216 21:14:03.346718 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-cdcch"] Feb 16 21:14:03 crc kubenswrapper[4811]: I0216 21:14:03.354304 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-cdcch"] Feb 16 21:14:04 crc kubenswrapper[4811]: I0216 21:14:04.031044 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qv84d" event={"ID":"6a07ef56-cd30-4652-9fdd-65279e9b5fb5","Type":"ContainerStarted","Data":"8fccd4c81ddc7e2103b2629665ee09572db2bf909e3ea0b3308d182738847222"} Feb 16 21:14:04 crc 
kubenswrapper[4811]: I0216 21:14:04.059747 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-qv84d" podStartSLOduration=3.514174659 podStartE2EDuration="38.059729863s" podCreationTimestamp="2026-02-16 21:13:26 +0000 UTC" firstStartedPulling="2026-02-16 21:13:27.57190753 +0000 UTC m=+1025.501203468" lastFinishedPulling="2026-02-16 21:14:02.117462744 +0000 UTC m=+1060.046758672" observedRunningTime="2026-02-16 21:14:04.051152843 +0000 UTC m=+1061.980448791" watchObservedRunningTime="2026-02-16 21:14:04.059729863 +0000 UTC m=+1061.989025801" Feb 16 21:14:04 crc kubenswrapper[4811]: I0216 21:14:04.096907 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:14:04 crc kubenswrapper[4811]: I0216 21:14:04.724823 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bf21953-4d87-4a23-a09a-454e12365b71" path="/var/lib/kubelet/pods/4bf21953-4d87-4a23-a09a-454e12365b71/volumes" Feb 16 21:14:05 crc kubenswrapper[4811]: I0216 21:14:05.047880 4811 generic.go:334] "Generic (PLEG): container finished" podID="89a1f359-cb47-470b-ad6e-48d11efacfce" containerID="ef53e4e24d7f3df32bca85abb72d8ba1aaa1129c3399026269d17c671af18e3f" exitCode=0 Feb 16 21:14:05 crc kubenswrapper[4811]: I0216 21:14:05.048255 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-njvjn" event={"ID":"89a1f359-cb47-470b-ad6e-48d11efacfce","Type":"ContainerDied","Data":"ef53e4e24d7f3df32bca85abb72d8ba1aaa1129c3399026269d17c671af18e3f"} Feb 16 21:14:05 crc kubenswrapper[4811]: I0216 21:14:05.375693 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 21:14:05 crc kubenswrapper[4811]: I0216 21:14:05.375999 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 21:14:05 crc 
kubenswrapper[4811]: I0216 21:14:05.408437 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 21:14:05 crc kubenswrapper[4811]: I0216 21:14:05.408498 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 21:14:05 crc kubenswrapper[4811]: I0216 21:14:05.408917 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 21:14:05 crc kubenswrapper[4811]: I0216 21:14:05.424216 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 21:14:05 crc kubenswrapper[4811]: I0216 21:14:05.452581 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 21:14:05 crc kubenswrapper[4811]: I0216 21:14:05.462800 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 21:14:06 crc kubenswrapper[4811]: I0216 21:14:06.062947 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 21:14:06 crc kubenswrapper[4811]: I0216 21:14:06.063145 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 21:14:06 crc kubenswrapper[4811]: I0216 21:14:06.063184 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 21:14:06 crc kubenswrapper[4811]: I0216 21:14:06.063219 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 21:14:07 crc kubenswrapper[4811]: I0216 21:14:07.667785 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-cdcch" 
podUID="4bf21953-4d87-4a23-a09a-454e12365b71" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Feb 16 21:14:08 crc kubenswrapper[4811]: I0216 21:14:08.084094 4811 generic.go:334] "Generic (PLEG): container finished" podID="6a07ef56-cd30-4652-9fdd-65279e9b5fb5" containerID="8fccd4c81ddc7e2103b2629665ee09572db2bf909e3ea0b3308d182738847222" exitCode=0 Feb 16 21:14:08 crc kubenswrapper[4811]: I0216 21:14:08.084141 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qv84d" event={"ID":"6a07ef56-cd30-4652-9fdd-65279e9b5fb5","Type":"ContainerDied","Data":"8fccd4c81ddc7e2103b2629665ee09572db2bf909e3ea0b3308d182738847222"} Feb 16 21:14:08 crc kubenswrapper[4811]: I0216 21:14:08.316607 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 21:14:08 crc kubenswrapper[4811]: I0216 21:14:08.316716 4811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 21:14:08 crc kubenswrapper[4811]: I0216 21:14:08.317083 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 21:14:08 crc kubenswrapper[4811]: I0216 21:14:08.317225 4811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 21:14:08 crc kubenswrapper[4811]: I0216 21:14:08.317289 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 21:14:08 crc kubenswrapper[4811]: I0216 21:14:08.448925 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 21:14:08 crc kubenswrapper[4811]: I0216 21:14:08.762570 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-njvjn" Feb 16 21:14:08 crc kubenswrapper[4811]: E0216 21:14:08.839938 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 21:14:08 crc kubenswrapper[4811]: I0216 21:14:08.839970 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d5t7\" (UniqueName: \"kubernetes.io/projected/89a1f359-cb47-470b-ad6e-48d11efacfce-kube-api-access-5d5t7\") pod \"89a1f359-cb47-470b-ad6e-48d11efacfce\" (UID: \"89a1f359-cb47-470b-ad6e-48d11efacfce\") " Feb 16 21:14:08 crc kubenswrapper[4811]: E0216 21:14:08.839998 4811 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 21:14:08 crc kubenswrapper[4811]: E0216 21:14:08.840139 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s56zx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-x49kk_openstack(46d0afcb-2a14-4e67-89fc-ed848d1637ce): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:14:08 crc kubenswrapper[4811]: I0216 21:14:08.840176 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/89a1f359-cb47-470b-ad6e-48d11efacfce-db-sync-config-data\") pod \"89a1f359-cb47-470b-ad6e-48d11efacfce\" (UID: \"89a1f359-cb47-470b-ad6e-48d11efacfce\") " Feb 16 21:14:08 crc kubenswrapper[4811]: I0216 21:14:08.840256 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a1f359-cb47-470b-ad6e-48d11efacfce-combined-ca-bundle\") pod \"89a1f359-cb47-470b-ad6e-48d11efacfce\" (UID: \"89a1f359-cb47-470b-ad6e-48d11efacfce\") " Feb 16 21:14:08 crc kubenswrapper[4811]: E0216 21:14:08.842164 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:14:08 crc kubenswrapper[4811]: I0216 21:14:08.846316 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89a1f359-cb47-470b-ad6e-48d11efacfce-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "89a1f359-cb47-470b-ad6e-48d11efacfce" (UID: "89a1f359-cb47-470b-ad6e-48d11efacfce"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:08 crc kubenswrapper[4811]: I0216 21:14:08.846631 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89a1f359-cb47-470b-ad6e-48d11efacfce-kube-api-access-5d5t7" (OuterVolumeSpecName: "kube-api-access-5d5t7") pod "89a1f359-cb47-470b-ad6e-48d11efacfce" (UID: "89a1f359-cb47-470b-ad6e-48d11efacfce"). InnerVolumeSpecName "kube-api-access-5d5t7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:14:08 crc kubenswrapper[4811]: I0216 21:14:08.877491 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89a1f359-cb47-470b-ad6e-48d11efacfce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89a1f359-cb47-470b-ad6e-48d11efacfce" (UID: "89a1f359-cb47-470b-ad6e-48d11efacfce"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:08 crc kubenswrapper[4811]: I0216 21:14:08.942588 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d5t7\" (UniqueName: \"kubernetes.io/projected/89a1f359-cb47-470b-ad6e-48d11efacfce-kube-api-access-5d5t7\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:08 crc kubenswrapper[4811]: I0216 21:14:08.942622 4811 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/89a1f359-cb47-470b-ad6e-48d11efacfce-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:08 crc kubenswrapper[4811]: I0216 21:14:08.942631 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a1f359-cb47-470b-ad6e-48d11efacfce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:09 crc kubenswrapper[4811]: I0216 21:14:09.098648 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-njvjn" Feb 16 21:14:09 crc kubenswrapper[4811]: I0216 21:14:09.098677 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-njvjn" event={"ID":"89a1f359-cb47-470b-ad6e-48d11efacfce","Type":"ContainerDied","Data":"cf4813fd3d681853193b7e0066dd95991615b849f0c39f442a9491cac2b0e39c"} Feb 16 21:14:09 crc kubenswrapper[4811]: I0216 21:14:09.098742 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf4813fd3d681853193b7e0066dd95991615b849f0c39f442a9491cac2b0e39c" Feb 16 21:14:09 crc kubenswrapper[4811]: I0216 21:14:09.438770 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-qv84d" Feb 16 21:14:09 crc kubenswrapper[4811]: I0216 21:14:09.557377 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-etc-machine-id\") pod \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\" (UID: \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\") " Feb 16 21:14:09 crc kubenswrapper[4811]: I0216 21:14:09.557463 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-config-data\") pod \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\" (UID: \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\") " Feb 16 21:14:09 crc kubenswrapper[4811]: I0216 21:14:09.557455 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6a07ef56-cd30-4652-9fdd-65279e9b5fb5" (UID: "6a07ef56-cd30-4652-9fdd-65279e9b5fb5"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:14:09 crc kubenswrapper[4811]: I0216 21:14:09.557483 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-db-sync-config-data\") pod \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\" (UID: \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\") " Feb 16 21:14:09 crc kubenswrapper[4811]: I0216 21:14:09.557626 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-combined-ca-bundle\") pod \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\" (UID: \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\") " Feb 16 21:14:09 crc kubenswrapper[4811]: I0216 21:14:09.557666 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-scripts\") pod \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\" (UID: \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\") " Feb 16 21:14:09 crc kubenswrapper[4811]: I0216 21:14:09.557760 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-882fb\" (UniqueName: \"kubernetes.io/projected/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-kube-api-access-882fb\") pod \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\" (UID: \"6a07ef56-cd30-4652-9fdd-65279e9b5fb5\") " Feb 16 21:14:09 crc kubenswrapper[4811]: I0216 21:14:09.559248 4811 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:09 crc kubenswrapper[4811]: I0216 21:14:09.561633 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-scripts" (OuterVolumeSpecName: "scripts") pod 
"6a07ef56-cd30-4652-9fdd-65279e9b5fb5" (UID: "6a07ef56-cd30-4652-9fdd-65279e9b5fb5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:09 crc kubenswrapper[4811]: I0216 21:14:09.562323 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6a07ef56-cd30-4652-9fdd-65279e9b5fb5" (UID: "6a07ef56-cd30-4652-9fdd-65279e9b5fb5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:09 crc kubenswrapper[4811]: I0216 21:14:09.562909 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-kube-api-access-882fb" (OuterVolumeSpecName: "kube-api-access-882fb") pod "6a07ef56-cd30-4652-9fdd-65279e9b5fb5" (UID: "6a07ef56-cd30-4652-9fdd-65279e9b5fb5"). InnerVolumeSpecName "kube-api-access-882fb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:14:09 crc kubenswrapper[4811]: I0216 21:14:09.583310 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a07ef56-cd30-4652-9fdd-65279e9b5fb5" (UID: "6a07ef56-cd30-4652-9fdd-65279e9b5fb5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:09 crc kubenswrapper[4811]: I0216 21:14:09.605459 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-config-data" (OuterVolumeSpecName: "config-data") pod "6a07ef56-cd30-4652-9fdd-65279e9b5fb5" (UID: "6a07ef56-cd30-4652-9fdd-65279e9b5fb5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:09 crc kubenswrapper[4811]: I0216 21:14:09.660607 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-882fb\" (UniqueName: \"kubernetes.io/projected/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-kube-api-access-882fb\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:09 crc kubenswrapper[4811]: I0216 21:14:09.660635 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:09 crc kubenswrapper[4811]: I0216 21:14:09.660644 4811 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:09 crc kubenswrapper[4811]: I0216 21:14:09.660652 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:09 crc kubenswrapper[4811]: I0216 21:14:09.660661 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a07ef56-cd30-4652-9fdd-65279e9b5fb5-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.119720 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18bbdf69-d815-49b8-a29d-8b90a8e2987f","Type":"ContainerStarted","Data":"be79136dbe2bdbb49aabeaece316378e2070175b98147962cce166a4d9efadc2"} Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.120970 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.120058 4811 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="18bbdf69-d815-49b8-a29d-8b90a8e2987f" containerName="proxy-httpd" containerID="cri-o://be79136dbe2bdbb49aabeaece316378e2070175b98147962cce166a4d9efadc2" gracePeriod=30 Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.119933 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="18bbdf69-d815-49b8-a29d-8b90a8e2987f" containerName="ceilometer-central-agent" containerID="cri-o://c725e1fc61d136f0ab43406a4006c67a2c77d8f20403dcc6309729aa00f35908" gracePeriod=30 Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.120413 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="18bbdf69-d815-49b8-a29d-8b90a8e2987f" containerName="sg-core" containerID="cri-o://dc47b4a1abc025f8c6300f7fa333be0cc3c69ef33433135bc8eadee043fbbe89" gracePeriod=30 Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.120392 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="18bbdf69-d815-49b8-a29d-8b90a8e2987f" containerName="ceilometer-notification-agent" containerID="cri-o://56c22659c3119094093ae4bc144a11c38520b8cb0093d16756bb587b1bc16d59" gracePeriod=30 Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.133027 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qv84d" event={"ID":"6a07ef56-cd30-4652-9fdd-65279e9b5fb5","Type":"ContainerDied","Data":"4549522a8b1cbdfd96ec2f891af8ec68207622ca88a4431eba19771e41d80e4c"} Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.133795 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4549522a8b1cbdfd96ec2f891af8ec68207622ca88a4431eba19771e41d80e4c" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.133947 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-qv84d" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.163300 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-8547d7757d-rdzdb"] Feb 16 21:14:10 crc kubenswrapper[4811]: E0216 21:14:10.163860 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf21953-4d87-4a23-a09a-454e12365b71" containerName="init" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.163885 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf21953-4d87-4a23-a09a-454e12365b71" containerName="init" Feb 16 21:14:10 crc kubenswrapper[4811]: E0216 21:14:10.163913 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a1f359-cb47-470b-ad6e-48d11efacfce" containerName="barbican-db-sync" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.163922 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a1f359-cb47-470b-ad6e-48d11efacfce" containerName="barbican-db-sync" Feb 16 21:14:10 crc kubenswrapper[4811]: E0216 21:14:10.163950 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf21953-4d87-4a23-a09a-454e12365b71" containerName="dnsmasq-dns" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.163961 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf21953-4d87-4a23-a09a-454e12365b71" containerName="dnsmasq-dns" Feb 16 21:14:10 crc kubenswrapper[4811]: E0216 21:14:10.163979 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a07ef56-cd30-4652-9fdd-65279e9b5fb5" containerName="cinder-db-sync" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.163987 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a07ef56-cd30-4652-9fdd-65279e9b5fb5" containerName="cinder-db-sync" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.164229 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf21953-4d87-4a23-a09a-454e12365b71" containerName="dnsmasq-dns" Feb 
16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.164257 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a07ef56-cd30-4652-9fdd-65279e9b5fb5" containerName="cinder-db-sync" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.164282 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a1f359-cb47-470b-ad6e-48d11efacfce" containerName="barbican-db-sync" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.165522 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8547d7757d-rdzdb" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.169639 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.169788 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-wfvl6" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.170004 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.173090 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dfcebd2-4ec9-463d-9ce6-801911550f42-combined-ca-bundle\") pod \"barbican-keystone-listener-8547d7757d-rdzdb\" (UID: \"0dfcebd2-4ec9-463d-9ce6-801911550f42\") " pod="openstack/barbican-keystone-listener-8547d7757d-rdzdb" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.173161 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dfcebd2-4ec9-463d-9ce6-801911550f42-logs\") pod \"barbican-keystone-listener-8547d7757d-rdzdb\" (UID: \"0dfcebd2-4ec9-463d-9ce6-801911550f42\") " pod="openstack/barbican-keystone-listener-8547d7757d-rdzdb" Feb 16 
21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.173239 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dfcebd2-4ec9-463d-9ce6-801911550f42-config-data\") pod \"barbican-keystone-listener-8547d7757d-rdzdb\" (UID: \"0dfcebd2-4ec9-463d-9ce6-801911550f42\") " pod="openstack/barbican-keystone-listener-8547d7757d-rdzdb" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.173258 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zz6m\" (UniqueName: \"kubernetes.io/projected/0dfcebd2-4ec9-463d-9ce6-801911550f42-kube-api-access-9zz6m\") pod \"barbican-keystone-listener-8547d7757d-rdzdb\" (UID: \"0dfcebd2-4ec9-463d-9ce6-801911550f42\") " pod="openstack/barbican-keystone-listener-8547d7757d-rdzdb" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.173290 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0dfcebd2-4ec9-463d-9ce6-801911550f42-config-data-custom\") pod \"barbican-keystone-listener-8547d7757d-rdzdb\" (UID: \"0dfcebd2-4ec9-463d-9ce6-801911550f42\") " pod="openstack/barbican-keystone-listener-8547d7757d-rdzdb" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.211346 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-58f445f5bc-kgwdh"] Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.213272 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-58f445f5bc-kgwdh" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.217968 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.248624 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8547d7757d-rdzdb"] Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.259945 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-58f445f5bc-kgwdh"] Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.262211 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.541707602 podStartE2EDuration="44.262181226s" podCreationTimestamp="2026-02-16 21:13:26 +0000 UTC" firstStartedPulling="2026-02-16 21:13:27.633393187 +0000 UTC m=+1025.562689125" lastFinishedPulling="2026-02-16 21:14:09.353866811 +0000 UTC m=+1067.283162749" observedRunningTime="2026-02-16 21:14:10.148884151 +0000 UTC m=+1068.078180089" watchObservedRunningTime="2026-02-16 21:14:10.262181226 +0000 UTC m=+1068.191477174" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.275579 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dfcebd2-4ec9-463d-9ce6-801911550f42-combined-ca-bundle\") pod \"barbican-keystone-listener-8547d7757d-rdzdb\" (UID: \"0dfcebd2-4ec9-463d-9ce6-801911550f42\") " pod="openstack/barbican-keystone-listener-8547d7757d-rdzdb" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.275658 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dfcebd2-4ec9-463d-9ce6-801911550f42-logs\") pod \"barbican-keystone-listener-8547d7757d-rdzdb\" (UID: \"0dfcebd2-4ec9-463d-9ce6-801911550f42\") " 
pod="openstack/barbican-keystone-listener-8547d7757d-rdzdb" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.275717 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dfcebd2-4ec9-463d-9ce6-801911550f42-config-data\") pod \"barbican-keystone-listener-8547d7757d-rdzdb\" (UID: \"0dfcebd2-4ec9-463d-9ce6-801911550f42\") " pod="openstack/barbican-keystone-listener-8547d7757d-rdzdb" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.275736 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zz6m\" (UniqueName: \"kubernetes.io/projected/0dfcebd2-4ec9-463d-9ce6-801911550f42-kube-api-access-9zz6m\") pod \"barbican-keystone-listener-8547d7757d-rdzdb\" (UID: \"0dfcebd2-4ec9-463d-9ce6-801911550f42\") " pod="openstack/barbican-keystone-listener-8547d7757d-rdzdb" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.275769 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0dfcebd2-4ec9-463d-9ce6-801911550f42-config-data-custom\") pod \"barbican-keystone-listener-8547d7757d-rdzdb\" (UID: \"0dfcebd2-4ec9-463d-9ce6-801911550f42\") " pod="openstack/barbican-keystone-listener-8547d7757d-rdzdb" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.278676 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dfcebd2-4ec9-463d-9ce6-801911550f42-logs\") pod \"barbican-keystone-listener-8547d7757d-rdzdb\" (UID: \"0dfcebd2-4ec9-463d-9ce6-801911550f42\") " pod="openstack/barbican-keystone-listener-8547d7757d-rdzdb" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.334729 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dfcebd2-4ec9-463d-9ce6-801911550f42-combined-ca-bundle\") pod 
\"barbican-keystone-listener-8547d7757d-rdzdb\" (UID: \"0dfcebd2-4ec9-463d-9ce6-801911550f42\") " pod="openstack/barbican-keystone-listener-8547d7757d-rdzdb" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.341445 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dfcebd2-4ec9-463d-9ce6-801911550f42-config-data\") pod \"barbican-keystone-listener-8547d7757d-rdzdb\" (UID: \"0dfcebd2-4ec9-463d-9ce6-801911550f42\") " pod="openstack/barbican-keystone-listener-8547d7757d-rdzdb" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.371600 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0dfcebd2-4ec9-463d-9ce6-801911550f42-config-data-custom\") pod \"barbican-keystone-listener-8547d7757d-rdzdb\" (UID: \"0dfcebd2-4ec9-463d-9ce6-801911550f42\") " pod="openstack/barbican-keystone-listener-8547d7757d-rdzdb" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.378355 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44198ae0-a1f3-4eee-bcba-4898da249e24-logs\") pod \"barbican-worker-58f445f5bc-kgwdh\" (UID: \"44198ae0-a1f3-4eee-bcba-4898da249e24\") " pod="openstack/barbican-worker-58f445f5bc-kgwdh" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.378430 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44198ae0-a1f3-4eee-bcba-4898da249e24-combined-ca-bundle\") pod \"barbican-worker-58f445f5bc-kgwdh\" (UID: \"44198ae0-a1f3-4eee-bcba-4898da249e24\") " pod="openstack/barbican-worker-58f445f5bc-kgwdh" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.378536 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk8b6\" (UniqueName: 
\"kubernetes.io/projected/44198ae0-a1f3-4eee-bcba-4898da249e24-kube-api-access-wk8b6\") pod \"barbican-worker-58f445f5bc-kgwdh\" (UID: \"44198ae0-a1f3-4eee-bcba-4898da249e24\") " pod="openstack/barbican-worker-58f445f5bc-kgwdh" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.378590 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44198ae0-a1f3-4eee-bcba-4898da249e24-config-data\") pod \"barbican-worker-58f445f5bc-kgwdh\" (UID: \"44198ae0-a1f3-4eee-bcba-4898da249e24\") " pod="openstack/barbican-worker-58f445f5bc-kgwdh" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.378607 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44198ae0-a1f3-4eee-bcba-4898da249e24-config-data-custom\") pod \"barbican-worker-58f445f5bc-kgwdh\" (UID: \"44198ae0-a1f3-4eee-bcba-4898da249e24\") " pod="openstack/barbican-worker-58f445f5bc-kgwdh" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.384305 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zz6m\" (UniqueName: \"kubernetes.io/projected/0dfcebd2-4ec9-463d-9ce6-801911550f42-kube-api-access-9zz6m\") pod \"barbican-keystone-listener-8547d7757d-rdzdb\" (UID: \"0dfcebd2-4ec9-463d-9ce6-801911550f42\") " pod="openstack/barbican-keystone-listener-8547d7757d-rdzdb" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.449685 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-bll6j"] Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.451383 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-bll6j" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.479828 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-bll6j\" (UID: \"db5e3d04-029e-4180-9e25-479ce12a175d\") " pod="openstack/dnsmasq-dns-848cf88cfc-bll6j" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.479898 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44198ae0-a1f3-4eee-bcba-4898da249e24-logs\") pod \"barbican-worker-58f445f5bc-kgwdh\" (UID: \"44198ae0-a1f3-4eee-bcba-4898da249e24\") " pod="openstack/barbican-worker-58f445f5bc-kgwdh" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.479923 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-config\") pod \"dnsmasq-dns-848cf88cfc-bll6j\" (UID: \"db5e3d04-029e-4180-9e25-479ce12a175d\") " pod="openstack/dnsmasq-dns-848cf88cfc-bll6j" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.479941 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44198ae0-a1f3-4eee-bcba-4898da249e24-combined-ca-bundle\") pod \"barbican-worker-58f445f5bc-kgwdh\" (UID: \"44198ae0-a1f3-4eee-bcba-4898da249e24\") " pod="openstack/barbican-worker-58f445f5bc-kgwdh" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.479983 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-bll6j\" (UID: 
\"db5e3d04-029e-4180-9e25-479ce12a175d\") " pod="openstack/dnsmasq-dns-848cf88cfc-bll6j" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.480004 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-bll6j\" (UID: \"db5e3d04-029e-4180-9e25-479ce12a175d\") " pod="openstack/dnsmasq-dns-848cf88cfc-bll6j" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.480019 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t4vc\" (UniqueName: \"kubernetes.io/projected/db5e3d04-029e-4180-9e25-479ce12a175d-kube-api-access-6t4vc\") pod \"dnsmasq-dns-848cf88cfc-bll6j\" (UID: \"db5e3d04-029e-4180-9e25-479ce12a175d\") " pod="openstack/dnsmasq-dns-848cf88cfc-bll6j" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.480057 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk8b6\" (UniqueName: \"kubernetes.io/projected/44198ae0-a1f3-4eee-bcba-4898da249e24-kube-api-access-wk8b6\") pod \"barbican-worker-58f445f5bc-kgwdh\" (UID: \"44198ae0-a1f3-4eee-bcba-4898da249e24\") " pod="openstack/barbican-worker-58f445f5bc-kgwdh" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.480104 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-bll6j\" (UID: \"db5e3d04-029e-4180-9e25-479ce12a175d\") " pod="openstack/dnsmasq-dns-848cf88cfc-bll6j" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.480121 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44198ae0-a1f3-4eee-bcba-4898da249e24-config-data\") pod 
\"barbican-worker-58f445f5bc-kgwdh\" (UID: \"44198ae0-a1f3-4eee-bcba-4898da249e24\") " pod="openstack/barbican-worker-58f445f5bc-kgwdh" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.480136 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44198ae0-a1f3-4eee-bcba-4898da249e24-config-data-custom\") pod \"barbican-worker-58f445f5bc-kgwdh\" (UID: \"44198ae0-a1f3-4eee-bcba-4898da249e24\") " pod="openstack/barbican-worker-58f445f5bc-kgwdh" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.485926 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44198ae0-a1f3-4eee-bcba-4898da249e24-logs\") pod \"barbican-worker-58f445f5bc-kgwdh\" (UID: \"44198ae0-a1f3-4eee-bcba-4898da249e24\") " pod="openstack/barbican-worker-58f445f5bc-kgwdh" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.505033 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44198ae0-a1f3-4eee-bcba-4898da249e24-config-data-custom\") pod \"barbican-worker-58f445f5bc-kgwdh\" (UID: \"44198ae0-a1f3-4eee-bcba-4898da249e24\") " pod="openstack/barbican-worker-58f445f5bc-kgwdh" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.507386 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44198ae0-a1f3-4eee-bcba-4898da249e24-combined-ca-bundle\") pod \"barbican-worker-58f445f5bc-kgwdh\" (UID: \"44198ae0-a1f3-4eee-bcba-4898da249e24\") " pod="openstack/barbican-worker-58f445f5bc-kgwdh" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.509335 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44198ae0-a1f3-4eee-bcba-4898da249e24-config-data\") pod \"barbican-worker-58f445f5bc-kgwdh\" (UID: 
\"44198ae0-a1f3-4eee-bcba-4898da249e24\") " pod="openstack/barbican-worker-58f445f5bc-kgwdh" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.520587 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-bll6j"] Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.521026 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8547d7757d-rdzdb" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.547161 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk8b6\" (UniqueName: \"kubernetes.io/projected/44198ae0-a1f3-4eee-bcba-4898da249e24-kube-api-access-wk8b6\") pod \"barbican-worker-58f445f5bc-kgwdh\" (UID: \"44198ae0-a1f3-4eee-bcba-4898da249e24\") " pod="openstack/barbican-worker-58f445f5bc-kgwdh" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.582016 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-config\") pod \"dnsmasq-dns-848cf88cfc-bll6j\" (UID: \"db5e3d04-029e-4180-9e25-479ce12a175d\") " pod="openstack/dnsmasq-dns-848cf88cfc-bll6j" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.582124 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-bll6j\" (UID: \"db5e3d04-029e-4180-9e25-479ce12a175d\") " pod="openstack/dnsmasq-dns-848cf88cfc-bll6j" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.582162 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-bll6j\" (UID: \"db5e3d04-029e-4180-9e25-479ce12a175d\") " 
pod="openstack/dnsmasq-dns-848cf88cfc-bll6j" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.582187 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t4vc\" (UniqueName: \"kubernetes.io/projected/db5e3d04-029e-4180-9e25-479ce12a175d-kube-api-access-6t4vc\") pod \"dnsmasq-dns-848cf88cfc-bll6j\" (UID: \"db5e3d04-029e-4180-9e25-479ce12a175d\") " pod="openstack/dnsmasq-dns-848cf88cfc-bll6j" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.582327 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-bll6j\" (UID: \"db5e3d04-029e-4180-9e25-479ce12a175d\") " pod="openstack/dnsmasq-dns-848cf88cfc-bll6j" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.582427 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-bll6j\" (UID: \"db5e3d04-029e-4180-9e25-479ce12a175d\") " pod="openstack/dnsmasq-dns-848cf88cfc-bll6j" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.583496 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-bll6j\" (UID: \"db5e3d04-029e-4180-9e25-479ce12a175d\") " pod="openstack/dnsmasq-dns-848cf88cfc-bll6j" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.584248 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-bll6j\" (UID: \"db5e3d04-029e-4180-9e25-479ce12a175d\") " pod="openstack/dnsmasq-dns-848cf88cfc-bll6j" Feb 16 21:14:10 crc 
kubenswrapper[4811]: I0216 21:14:10.584956 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-config\") pod \"dnsmasq-dns-848cf88cfc-bll6j\" (UID: \"db5e3d04-029e-4180-9e25-479ce12a175d\") " pod="openstack/dnsmasq-dns-848cf88cfc-bll6j" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.586386 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-bll6j\" (UID: \"db5e3d04-029e-4180-9e25-479ce12a175d\") " pod="openstack/dnsmasq-dns-848cf88cfc-bll6j" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.604176 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7b5f79c58b-j4b9c"] Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.606538 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-bll6j\" (UID: \"db5e3d04-029e-4180-9e25-479ce12a175d\") " pod="openstack/dnsmasq-dns-848cf88cfc-bll6j" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.606635 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b5f79c58b-j4b9c" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.617675 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.666294 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-58f445f5bc-kgwdh" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.668727 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b5f79c58b-j4b9c"] Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.669878 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t4vc\" (UniqueName: \"kubernetes.io/projected/db5e3d04-029e-4180-9e25-479ce12a175d-kube-api-access-6t4vc\") pod \"dnsmasq-dns-848cf88cfc-bll6j\" (UID: \"db5e3d04-029e-4180-9e25-479ce12a175d\") " pod="openstack/dnsmasq-dns-848cf88cfc-bll6j" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.688173 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-combined-ca-bundle\") pod \"barbican-api-7b5f79c58b-j4b9c\" (UID: \"c6c93c24-92dc-4a85-8d40-862fcb47fbe3\") " pod="openstack/barbican-api-7b5f79c58b-j4b9c" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.688430 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwssf\" (UniqueName: \"kubernetes.io/projected/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-kube-api-access-nwssf\") pod \"barbican-api-7b5f79c58b-j4b9c\" (UID: \"c6c93c24-92dc-4a85-8d40-862fcb47fbe3\") " pod="openstack/barbican-api-7b5f79c58b-j4b9c" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.688580 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-config-data-custom\") pod \"barbican-api-7b5f79c58b-j4b9c\" (UID: \"c6c93c24-92dc-4a85-8d40-862fcb47fbe3\") " pod="openstack/barbican-api-7b5f79c58b-j4b9c" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.688749 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-config-data\") pod \"barbican-api-7b5f79c58b-j4b9c\" (UID: \"c6c93c24-92dc-4a85-8d40-862fcb47fbe3\") " pod="openstack/barbican-api-7b5f79c58b-j4b9c" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.688848 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-logs\") pod \"barbican-api-7b5f79c58b-j4b9c\" (UID: \"c6c93c24-92dc-4a85-8d40-862fcb47fbe3\") " pod="openstack/barbican-api-7b5f79c58b-j4b9c" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.768261 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.769906 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.782397 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5x9lq" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.782632 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.783233 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.783358 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.790749 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-config-data\") pod \"barbican-api-7b5f79c58b-j4b9c\" (UID: 
\"c6c93c24-92dc-4a85-8d40-862fcb47fbe3\") " pod="openstack/barbican-api-7b5f79c58b-j4b9c" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.790798 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-logs\") pod \"barbican-api-7b5f79c58b-j4b9c\" (UID: \"c6c93c24-92dc-4a85-8d40-862fcb47fbe3\") " pod="openstack/barbican-api-7b5f79c58b-j4b9c" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.790920 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-combined-ca-bundle\") pod \"barbican-api-7b5f79c58b-j4b9c\" (UID: \"c6c93c24-92dc-4a85-8d40-862fcb47fbe3\") " pod="openstack/barbican-api-7b5f79c58b-j4b9c" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.791009 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwssf\" (UniqueName: \"kubernetes.io/projected/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-kube-api-access-nwssf\") pod \"barbican-api-7b5f79c58b-j4b9c\" (UID: \"c6c93c24-92dc-4a85-8d40-862fcb47fbe3\") " pod="openstack/barbican-api-7b5f79c58b-j4b9c" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.791035 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-config-data-custom\") pod \"barbican-api-7b5f79c58b-j4b9c\" (UID: \"c6c93c24-92dc-4a85-8d40-862fcb47fbe3\") " pod="openstack/barbican-api-7b5f79c58b-j4b9c" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.793324 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-logs\") pod \"barbican-api-7b5f79c58b-j4b9c\" (UID: \"c6c93c24-92dc-4a85-8d40-862fcb47fbe3\") " 
pod="openstack/barbican-api-7b5f79c58b-j4b9c" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.797838 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-combined-ca-bundle\") pod \"barbican-api-7b5f79c58b-j4b9c\" (UID: \"c6c93c24-92dc-4a85-8d40-862fcb47fbe3\") " pod="openstack/barbican-api-7b5f79c58b-j4b9c" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.799588 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-bll6j" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.799961 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.803108 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-config-data-custom\") pod \"barbican-api-7b5f79c58b-j4b9c\" (UID: \"c6c93c24-92dc-4a85-8d40-862fcb47fbe3\") " pod="openstack/barbican-api-7b5f79c58b-j4b9c" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.804721 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-config-data\") pod \"barbican-api-7b5f79c58b-j4b9c\" (UID: \"c6c93c24-92dc-4a85-8d40-862fcb47fbe3\") " pod="openstack/barbican-api-7b5f79c58b-j4b9c" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.843248 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-bll6j"] Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.850733 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwssf\" (UniqueName: \"kubernetes.io/projected/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-kube-api-access-nwssf\") pod \"barbican-api-7b5f79c58b-j4b9c\" 
(UID: \"c6c93c24-92dc-4a85-8d40-862fcb47fbe3\") " pod="openstack/barbican-api-7b5f79c58b-j4b9c" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.893231 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b5f79c58b-j4b9c" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.893604 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8020087b-01d7-425a-84fc-dd7e9278f4d2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8020087b-01d7-425a-84fc-dd7e9278f4d2\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.893669 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8020087b-01d7-425a-84fc-dd7e9278f4d2-config-data\") pod \"cinder-scheduler-0\" (UID: \"8020087b-01d7-425a-84fc-dd7e9278f4d2\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.893720 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8020087b-01d7-425a-84fc-dd7e9278f4d2-scripts\") pod \"cinder-scheduler-0\" (UID: \"8020087b-01d7-425a-84fc-dd7e9278f4d2\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.893764 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8020087b-01d7-425a-84fc-dd7e9278f4d2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8020087b-01d7-425a-84fc-dd7e9278f4d2\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.893846 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2qxnx\" (UniqueName: \"kubernetes.io/projected/8020087b-01d7-425a-84fc-dd7e9278f4d2-kube-api-access-2qxnx\") pod \"cinder-scheduler-0\" (UID: \"8020087b-01d7-425a-84fc-dd7e9278f4d2\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.893869 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8020087b-01d7-425a-84fc-dd7e9278f4d2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8020087b-01d7-425a-84fc-dd7e9278f4d2\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.933252 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-wk5kr"] Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.935042 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.959018 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-wk5kr"] Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.987334 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.989029 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.992086 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.995682 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8020087b-01d7-425a-84fc-dd7e9278f4d2-scripts\") pod \"cinder-scheduler-0\" (UID: \"8020087b-01d7-425a-84fc-dd7e9278f4d2\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.995763 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-wk5kr\" (UID: \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\") " pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.995810 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8020087b-01d7-425a-84fc-dd7e9278f4d2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8020087b-01d7-425a-84fc-dd7e9278f4d2\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.995897 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-wk5kr\" (UID: \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\") " pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.995936 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-wk5kr\" (UID: \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\") " pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.995974 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j855z\" (UniqueName: \"kubernetes.io/projected/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-kube-api-access-j855z\") pod \"dnsmasq-dns-6578955fd5-wk5kr\" (UID: \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\") " pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.995995 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qxnx\" (UniqueName: \"kubernetes.io/projected/8020087b-01d7-425a-84fc-dd7e9278f4d2-kube-api-access-2qxnx\") pod \"cinder-scheduler-0\" (UID: \"8020087b-01d7-425a-84fc-dd7e9278f4d2\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.996025 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8020087b-01d7-425a-84fc-dd7e9278f4d2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8020087b-01d7-425a-84fc-dd7e9278f4d2\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.996049 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-dns-svc\") pod \"dnsmasq-dns-6578955fd5-wk5kr\" (UID: \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\") " pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.996074 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-config\") pod \"dnsmasq-dns-6578955fd5-wk5kr\" (UID: \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\") " pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.996106 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8020087b-01d7-425a-84fc-dd7e9278f4d2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8020087b-01d7-425a-84fc-dd7e9278f4d2\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.996161 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8020087b-01d7-425a-84fc-dd7e9278f4d2-config-data\") pod \"cinder-scheduler-0\" (UID: \"8020087b-01d7-425a-84fc-dd7e9278f4d2\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.999373 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 16 21:14:10 crc kubenswrapper[4811]: I0216 21:14:10.999627 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8020087b-01d7-425a-84fc-dd7e9278f4d2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8020087b-01d7-425a-84fc-dd7e9278f4d2\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.009345 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8020087b-01d7-425a-84fc-dd7e9278f4d2-config-data\") pod \"cinder-scheduler-0\" (UID: \"8020087b-01d7-425a-84fc-dd7e9278f4d2\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.019928 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8020087b-01d7-425a-84fc-dd7e9278f4d2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8020087b-01d7-425a-84fc-dd7e9278f4d2\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.022096 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8020087b-01d7-425a-84fc-dd7e9278f4d2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8020087b-01d7-425a-84fc-dd7e9278f4d2\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.026677 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8020087b-01d7-425a-84fc-dd7e9278f4d2-scripts\") pod \"cinder-scheduler-0\" (UID: \"8020087b-01d7-425a-84fc-dd7e9278f4d2\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.031541 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qxnx\" (UniqueName: \"kubernetes.io/projected/8020087b-01d7-425a-84fc-dd7e9278f4d2-kube-api-access-2qxnx\") pod \"cinder-scheduler-0\" (UID: \"8020087b-01d7-425a-84fc-dd7e9278f4d2\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.098488 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fdb0a1-a616-4ed9-b106-f7de8952a77a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") " pod="openstack/cinder-api-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.098577 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24fdb0a1-a616-4ed9-b106-f7de8952a77a-config-data\") pod \"cinder-api-0\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") " 
pod="openstack/cinder-api-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.098595 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24fdb0a1-a616-4ed9-b106-f7de8952a77a-config-data-custom\") pod \"cinder-api-0\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") " pod="openstack/cinder-api-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.098622 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-wk5kr\" (UID: \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\") " pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.098652 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-wk5kr\" (UID: \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\") " pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.098672 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24fdb0a1-a616-4ed9-b106-f7de8952a77a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") " pod="openstack/cinder-api-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.098697 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j855z\" (UniqueName: \"kubernetes.io/projected/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-kube-api-access-j855z\") pod \"dnsmasq-dns-6578955fd5-wk5kr\" (UID: \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\") " pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" Feb 16 21:14:11 crc 
kubenswrapper[4811]: I0216 21:14:11.098726 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24fdb0a1-a616-4ed9-b106-f7de8952a77a-logs\") pod \"cinder-api-0\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") " pod="openstack/cinder-api-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.098754 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-dns-svc\") pod \"dnsmasq-dns-6578955fd5-wk5kr\" (UID: \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\") " pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.098772 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2frw9\" (UniqueName: \"kubernetes.io/projected/24fdb0a1-a616-4ed9-b106-f7de8952a77a-kube-api-access-2frw9\") pod \"cinder-api-0\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") " pod="openstack/cinder-api-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.098791 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-config\") pod \"dnsmasq-dns-6578955fd5-wk5kr\" (UID: \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\") " pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.098812 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24fdb0a1-a616-4ed9-b106-f7de8952a77a-scripts\") pod \"cinder-api-0\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") " pod="openstack/cinder-api-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.098940 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-wk5kr\" (UID: \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\") " pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.099857 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-wk5kr\" (UID: \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\") " pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.100016 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-wk5kr\" (UID: \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\") " pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.104044 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-wk5kr\" (UID: \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\") " pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.104314 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-dns-svc\") pod \"dnsmasq-dns-6578955fd5-wk5kr\" (UID: \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\") " pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.106176 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-config\") pod 
\"dnsmasq-dns-6578955fd5-wk5kr\" (UID: \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\") " pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.118020 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j855z\" (UniqueName: \"kubernetes.io/projected/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-kube-api-access-j855z\") pod \"dnsmasq-dns-6578955fd5-wk5kr\" (UID: \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\") " pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.162856 4811 generic.go:334] "Generic (PLEG): container finished" podID="18bbdf69-d815-49b8-a29d-8b90a8e2987f" containerID="be79136dbe2bdbb49aabeaece316378e2070175b98147962cce166a4d9efadc2" exitCode=0 Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.162882 4811 generic.go:334] "Generic (PLEG): container finished" podID="18bbdf69-d815-49b8-a29d-8b90a8e2987f" containerID="dc47b4a1abc025f8c6300f7fa333be0cc3c69ef33433135bc8eadee043fbbe89" exitCode=2 Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.162901 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18bbdf69-d815-49b8-a29d-8b90a8e2987f","Type":"ContainerDied","Data":"be79136dbe2bdbb49aabeaece316378e2070175b98147962cce166a4d9efadc2"} Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.162927 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18bbdf69-d815-49b8-a29d-8b90a8e2987f","Type":"ContainerDied","Data":"dc47b4a1abc025f8c6300f7fa333be0cc3c69ef33433135bc8eadee043fbbe89"} Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.200480 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24fdb0a1-a616-4ed9-b106-f7de8952a77a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") " pod="openstack/cinder-api-0" Feb 16 21:14:11 
crc kubenswrapper[4811]: I0216 21:14:11.200540 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24fdb0a1-a616-4ed9-b106-f7de8952a77a-logs\") pod \"cinder-api-0\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") " pod="openstack/cinder-api-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.200562 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2frw9\" (UniqueName: \"kubernetes.io/projected/24fdb0a1-a616-4ed9-b106-f7de8952a77a-kube-api-access-2frw9\") pod \"cinder-api-0\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") " pod="openstack/cinder-api-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.200590 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24fdb0a1-a616-4ed9-b106-f7de8952a77a-scripts\") pod \"cinder-api-0\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") " pod="openstack/cinder-api-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.200632 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24fdb0a1-a616-4ed9-b106-f7de8952a77a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") " pod="openstack/cinder-api-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.200702 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fdb0a1-a616-4ed9-b106-f7de8952a77a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") " pod="openstack/cinder-api-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.200750 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24fdb0a1-a616-4ed9-b106-f7de8952a77a-config-data\") pod 
\"cinder-api-0\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") " pod="openstack/cinder-api-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.200765 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24fdb0a1-a616-4ed9-b106-f7de8952a77a-config-data-custom\") pod \"cinder-api-0\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") " pod="openstack/cinder-api-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.201033 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24fdb0a1-a616-4ed9-b106-f7de8952a77a-logs\") pod \"cinder-api-0\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") " pod="openstack/cinder-api-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.204769 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24fdb0a1-a616-4ed9-b106-f7de8952a77a-config-data-custom\") pod \"cinder-api-0\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") " pod="openstack/cinder-api-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.206114 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24fdb0a1-a616-4ed9-b106-f7de8952a77a-scripts\") pod \"cinder-api-0\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") " pod="openstack/cinder-api-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.206370 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fdb0a1-a616-4ed9-b106-f7de8952a77a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") " pod="openstack/cinder-api-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.210662 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/24fdb0a1-a616-4ed9-b106-f7de8952a77a-config-data\") pod \"cinder-api-0\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") " pod="openstack/cinder-api-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.216349 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2frw9\" (UniqueName: \"kubernetes.io/projected/24fdb0a1-a616-4ed9-b106-f7de8952a77a-kube-api-access-2frw9\") pod \"cinder-api-0\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") " pod="openstack/cinder-api-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.293702 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.318520 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.324089 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8547d7757d-rdzdb"] Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.335256 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.496398 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-58f445f5bc-kgwdh"] Feb 16 21:14:11 crc kubenswrapper[4811]: W0216 21:14:11.517023 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44198ae0_a1f3_4eee_bcba_4898da249e24.slice/crio-2d6f6f5ae38c37d3aea9477bb8464caf24729e13c84ea1c0b23ac13db2dcc455 WatchSource:0}: Error finding container 2d6f6f5ae38c37d3aea9477bb8464caf24729e13c84ea1c0b23ac13db2dcc455: Status 404 returned error can't find the container with id 2d6f6f5ae38c37d3aea9477bb8464caf24729e13c84ea1c0b23ac13db2dcc455 Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.577613 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b5f79c58b-j4b9c"] Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.598790 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-bll6j"] Feb 16 21:14:11 crc kubenswrapper[4811]: W0216 21:14:11.601740 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb5e3d04_029e_4180_9e25_479ce12a175d.slice/crio-ca875f9deaeee61453b0b451457a6678f3d3807f7a1bfbf3e5b0dcfa9537fdd5 WatchSource:0}: Error finding container ca875f9deaeee61453b0b451457a6678f3d3807f7a1bfbf3e5b0dcfa9537fdd5: Status 404 returned error can't find the container with id ca875f9deaeee61453b0b451457a6678f3d3807f7a1bfbf3e5b0dcfa9537fdd5 Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.831893 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.867349 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.934833 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18bbdf69-d815-49b8-a29d-8b90a8e2987f-log-httpd\") pod \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.936962 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxzw2\" (UniqueName: \"kubernetes.io/projected/18bbdf69-d815-49b8-a29d-8b90a8e2987f-kube-api-access-nxzw2\") pod \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.937129 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18bbdf69-d815-49b8-a29d-8b90a8e2987f-config-data\") pod \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.937221 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18bbdf69-d815-49b8-a29d-8b90a8e2987f-sg-core-conf-yaml\") pod \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.937259 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18bbdf69-d815-49b8-a29d-8b90a8e2987f-run-httpd\") pod \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " Feb 16 21:14:11 crc 
kubenswrapper[4811]: I0216 21:14:11.937351 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18bbdf69-d815-49b8-a29d-8b90a8e2987f-combined-ca-bundle\") pod \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.937384 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18bbdf69-d815-49b8-a29d-8b90a8e2987f-scripts\") pod \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\" (UID: \"18bbdf69-d815-49b8-a29d-8b90a8e2987f\") " Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.935245 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18bbdf69-d815-49b8-a29d-8b90a8e2987f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "18bbdf69-d815-49b8-a29d-8b90a8e2987f" (UID: "18bbdf69-d815-49b8-a29d-8b90a8e2987f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.943124 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18bbdf69-d815-49b8-a29d-8b90a8e2987f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "18bbdf69-d815-49b8-a29d-8b90a8e2987f" (UID: "18bbdf69-d815-49b8-a29d-8b90a8e2987f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.948768 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18bbdf69-d815-49b8-a29d-8b90a8e2987f-scripts" (OuterVolumeSpecName: "scripts") pod "18bbdf69-d815-49b8-a29d-8b90a8e2987f" (UID: "18bbdf69-d815-49b8-a29d-8b90a8e2987f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.958617 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18bbdf69-d815-49b8-a29d-8b90a8e2987f-kube-api-access-nxzw2" (OuterVolumeSpecName: "kube-api-access-nxzw2") pod "18bbdf69-d815-49b8-a29d-8b90a8e2987f" (UID: "18bbdf69-d815-49b8-a29d-8b90a8e2987f"). InnerVolumeSpecName "kube-api-access-nxzw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.989002 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 21:14:11 crc kubenswrapper[4811]: I0216 21:14:11.996626 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18bbdf69-d815-49b8-a29d-8b90a8e2987f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "18bbdf69-d815-49b8-a29d-8b90a8e2987f" (UID: "18bbdf69-d815-49b8-a29d-8b90a8e2987f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.006978 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-wk5kr"] Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.039697 4811 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18bbdf69-d815-49b8-a29d-8b90a8e2987f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.039724 4811 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18bbdf69-d815-49b8-a29d-8b90a8e2987f-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.039736 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18bbdf69-d815-49b8-a29d-8b90a8e2987f-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.039746 4811 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18bbdf69-d815-49b8-a29d-8b90a8e2987f-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.039756 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxzw2\" (UniqueName: \"kubernetes.io/projected/18bbdf69-d815-49b8-a29d-8b90a8e2987f-kube-api-access-nxzw2\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:12 crc kubenswrapper[4811]: W0216 21:14:12.054983 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1f571c4_4cc9_417d_8fe3_84cf4dba83a8.slice/crio-217a298d933f5f9099f5d21bc93d21766101c1e15a7e43d74bae3897ae0633d7 WatchSource:0}: Error finding container 217a298d933f5f9099f5d21bc93d21766101c1e15a7e43d74bae3897ae0633d7: Status 404 returned error can't find the 
container with id 217a298d933f5f9099f5d21bc93d21766101c1e15a7e43d74bae3897ae0633d7 Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.067426 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18bbdf69-d815-49b8-a29d-8b90a8e2987f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18bbdf69-d815-49b8-a29d-8b90a8e2987f" (UID: "18bbdf69-d815-49b8-a29d-8b90a8e2987f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.103685 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18bbdf69-d815-49b8-a29d-8b90a8e2987f-config-data" (OuterVolumeSpecName: "config-data") pod "18bbdf69-d815-49b8-a29d-8b90a8e2987f" (UID: "18bbdf69-d815-49b8-a29d-8b90a8e2987f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.141219 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18bbdf69-d815-49b8-a29d-8b90a8e2987f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.141248 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18bbdf69-d815-49b8-a29d-8b90a8e2987f-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.180994 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" event={"ID":"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8","Type":"ContainerStarted","Data":"217a298d933f5f9099f5d21bc93d21766101c1e15a7e43d74bae3897ae0633d7"} Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.182398 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"24fdb0a1-a616-4ed9-b106-f7de8952a77a","Type":"ContainerStarted","Data":"673e30768686328c5fb2074b544362dc4ffd6f4d85b7e9aebfe8cbd70cad0fc1"} Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.183746 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-58f445f5bc-kgwdh" event={"ID":"44198ae0-a1f3-4eee-bcba-4898da249e24","Type":"ContainerStarted","Data":"2d6f6f5ae38c37d3aea9477bb8464caf24729e13c84ea1c0b23ac13db2dcc455"} Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.188606 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8020087b-01d7-425a-84fc-dd7e9278f4d2","Type":"ContainerStarted","Data":"3d9fc753a0f69200d9075b2a3622b8625d1a79d913343d3875166f2f304cebfe"} Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.190334 4811 generic.go:334] "Generic (PLEG): container finished" podID="db5e3d04-029e-4180-9e25-479ce12a175d" containerID="a280ccd6dff994b06b053685eb15a03ad1b35dcf3aab6ce5e682b0e3053ce9fe" exitCode=0 Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.190392 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-bll6j" event={"ID":"db5e3d04-029e-4180-9e25-479ce12a175d","Type":"ContainerDied","Data":"a280ccd6dff994b06b053685eb15a03ad1b35dcf3aab6ce5e682b0e3053ce9fe"} Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.190411 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-bll6j" event={"ID":"db5e3d04-029e-4180-9e25-479ce12a175d","Type":"ContainerStarted","Data":"ca875f9deaeee61453b0b451457a6678f3d3807f7a1bfbf3e5b0dcfa9537fdd5"} Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.201046 4811 generic.go:334] "Generic (PLEG): container finished" podID="18bbdf69-d815-49b8-a29d-8b90a8e2987f" containerID="56c22659c3119094093ae4bc144a11c38520b8cb0093d16756bb587b1bc16d59" exitCode=0 Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.201074 4811 generic.go:334] 
"Generic (PLEG): container finished" podID="18bbdf69-d815-49b8-a29d-8b90a8e2987f" containerID="c725e1fc61d136f0ab43406a4006c67a2c77d8f20403dcc6309729aa00f35908" exitCode=0 Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.201109 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18bbdf69-d815-49b8-a29d-8b90a8e2987f","Type":"ContainerDied","Data":"56c22659c3119094093ae4bc144a11c38520b8cb0093d16756bb587b1bc16d59"} Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.201130 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18bbdf69-d815-49b8-a29d-8b90a8e2987f","Type":"ContainerDied","Data":"c725e1fc61d136f0ab43406a4006c67a2c77d8f20403dcc6309729aa00f35908"} Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.201140 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18bbdf69-d815-49b8-a29d-8b90a8e2987f","Type":"ContainerDied","Data":"1556def253178d0e96c11d7bb3c14ac5475e77c2cb66c7001cdad596f525f50d"} Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.201154 4811 scope.go:117] "RemoveContainer" containerID="be79136dbe2bdbb49aabeaece316378e2070175b98147962cce166a4d9efadc2" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.201295 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.218180 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b5f79c58b-j4b9c" event={"ID":"c6c93c24-92dc-4a85-8d40-862fcb47fbe3","Type":"ContainerStarted","Data":"69ad880a0a307911f5fe1d1ae01c82340d364d6682a880a824fa705c40f36ff6"} Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.218231 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b5f79c58b-j4b9c" event={"ID":"c6c93c24-92dc-4a85-8d40-862fcb47fbe3","Type":"ContainerStarted","Data":"265106bc24e0ddcb19134a1054682c9962275ab9fa5b104c3b09c7477e07b9e5"} Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.218242 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b5f79c58b-j4b9c" event={"ID":"c6c93c24-92dc-4a85-8d40-862fcb47fbe3","Type":"ContainerStarted","Data":"2c1bfeb9bd6f25e76c1859b5d6c5cdcdd21663ac02fba64abb0aba3902be0462"} Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.219349 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b5f79c58b-j4b9c" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.219426 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b5f79c58b-j4b9c" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.222953 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8547d7757d-rdzdb" event={"ID":"0dfcebd2-4ec9-463d-9ce6-801911550f42","Type":"ContainerStarted","Data":"05c2c28439ceebfd0b7c8128891e918bece9cd2345013b731d64b1aec5dd1057"} Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.235548 4811 scope.go:117] "RemoveContainer" containerID="dc47b4a1abc025f8c6300f7fa333be0cc3c69ef33433135bc8eadee043fbbe89" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.250986 4811 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/barbican-api-7b5f79c58b-j4b9c" podStartSLOduration=2.250954492 podStartE2EDuration="2.250954492s" podCreationTimestamp="2026-02-16 21:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:14:12.236091652 +0000 UTC m=+1070.165387590" watchObservedRunningTime="2026-02-16 21:14:12.250954492 +0000 UTC m=+1070.180250450" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.270124 4811 scope.go:117] "RemoveContainer" containerID="56c22659c3119094093ae4bc144a11c38520b8cb0093d16756bb587b1bc16d59" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.271749 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.287710 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.330050 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:14:12 crc kubenswrapper[4811]: E0216 21:14:12.330544 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18bbdf69-d815-49b8-a29d-8b90a8e2987f" containerName="sg-core" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.330557 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="18bbdf69-d815-49b8-a29d-8b90a8e2987f" containerName="sg-core" Feb 16 21:14:12 crc kubenswrapper[4811]: E0216 21:14:12.330573 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18bbdf69-d815-49b8-a29d-8b90a8e2987f" containerName="ceilometer-notification-agent" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.330579 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="18bbdf69-d815-49b8-a29d-8b90a8e2987f" containerName="ceilometer-notification-agent" Feb 16 21:14:12 crc kubenswrapper[4811]: E0216 21:14:12.330599 4811 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="18bbdf69-d815-49b8-a29d-8b90a8e2987f" containerName="proxy-httpd" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.330606 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="18bbdf69-d815-49b8-a29d-8b90a8e2987f" containerName="proxy-httpd" Feb 16 21:14:12 crc kubenswrapper[4811]: E0216 21:14:12.330614 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18bbdf69-d815-49b8-a29d-8b90a8e2987f" containerName="ceilometer-central-agent" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.330620 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="18bbdf69-d815-49b8-a29d-8b90a8e2987f" containerName="ceilometer-central-agent" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.330922 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="18bbdf69-d815-49b8-a29d-8b90a8e2987f" containerName="proxy-httpd" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.330935 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="18bbdf69-d815-49b8-a29d-8b90a8e2987f" containerName="ceilometer-notification-agent" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.330948 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="18bbdf69-d815-49b8-a29d-8b90a8e2987f" containerName="ceilometer-central-agent" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.330960 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="18bbdf69-d815-49b8-a29d-8b90a8e2987f" containerName="sg-core" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.332839 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.336913 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.338032 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.346084 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.362688 4811 scope.go:117] "RemoveContainer" containerID="c725e1fc61d136f0ab43406a4006c67a2c77d8f20403dcc6309729aa00f35908" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.451391 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c8f1851-630d-4db5-8f53-1edcc96e1706-scripts\") pod \"ceilometer-0\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " pod="openstack/ceilometer-0" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.451443 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gp9k\" (UniqueName: \"kubernetes.io/projected/7c8f1851-630d-4db5-8f53-1edcc96e1706-kube-api-access-9gp9k\") pod \"ceilometer-0\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " pod="openstack/ceilometer-0" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.451470 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c8f1851-630d-4db5-8f53-1edcc96e1706-log-httpd\") pod \"ceilometer-0\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " pod="openstack/ceilometer-0" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.451504 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c8f1851-630d-4db5-8f53-1edcc96e1706-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " pod="openstack/ceilometer-0" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.451523 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c8f1851-630d-4db5-8f53-1edcc96e1706-run-httpd\") pod \"ceilometer-0\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " pod="openstack/ceilometer-0" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.451569 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c8f1851-630d-4db5-8f53-1edcc96e1706-config-data\") pod \"ceilometer-0\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " pod="openstack/ceilometer-0" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.451595 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c8f1851-630d-4db5-8f53-1edcc96e1706-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " pod="openstack/ceilometer-0" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.461577 4811 scope.go:117] "RemoveContainer" containerID="be79136dbe2bdbb49aabeaece316378e2070175b98147962cce166a4d9efadc2" Feb 16 21:14:12 crc kubenswrapper[4811]: E0216 21:14:12.471134 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be79136dbe2bdbb49aabeaece316378e2070175b98147962cce166a4d9efadc2\": container with ID starting with be79136dbe2bdbb49aabeaece316378e2070175b98147962cce166a4d9efadc2 not found: ID does not exist" containerID="be79136dbe2bdbb49aabeaece316378e2070175b98147962cce166a4d9efadc2" Feb 16 21:14:12 crc 
kubenswrapper[4811]: I0216 21:14:12.471178 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be79136dbe2bdbb49aabeaece316378e2070175b98147962cce166a4d9efadc2"} err="failed to get container status \"be79136dbe2bdbb49aabeaece316378e2070175b98147962cce166a4d9efadc2\": rpc error: code = NotFound desc = could not find container \"be79136dbe2bdbb49aabeaece316378e2070175b98147962cce166a4d9efadc2\": container with ID starting with be79136dbe2bdbb49aabeaece316378e2070175b98147962cce166a4d9efadc2 not found: ID does not exist" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.471217 4811 scope.go:117] "RemoveContainer" containerID="dc47b4a1abc025f8c6300f7fa333be0cc3c69ef33433135bc8eadee043fbbe89" Feb 16 21:14:12 crc kubenswrapper[4811]: E0216 21:14:12.474570 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc47b4a1abc025f8c6300f7fa333be0cc3c69ef33433135bc8eadee043fbbe89\": container with ID starting with dc47b4a1abc025f8c6300f7fa333be0cc3c69ef33433135bc8eadee043fbbe89 not found: ID does not exist" containerID="dc47b4a1abc025f8c6300f7fa333be0cc3c69ef33433135bc8eadee043fbbe89" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.474599 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc47b4a1abc025f8c6300f7fa333be0cc3c69ef33433135bc8eadee043fbbe89"} err="failed to get container status \"dc47b4a1abc025f8c6300f7fa333be0cc3c69ef33433135bc8eadee043fbbe89\": rpc error: code = NotFound desc = could not find container \"dc47b4a1abc025f8c6300f7fa333be0cc3c69ef33433135bc8eadee043fbbe89\": container with ID starting with dc47b4a1abc025f8c6300f7fa333be0cc3c69ef33433135bc8eadee043fbbe89 not found: ID does not exist" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.474614 4811 scope.go:117] "RemoveContainer" containerID="56c22659c3119094093ae4bc144a11c38520b8cb0093d16756bb587b1bc16d59" Feb 16 
21:14:12 crc kubenswrapper[4811]: E0216 21:14:12.474874 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56c22659c3119094093ae4bc144a11c38520b8cb0093d16756bb587b1bc16d59\": container with ID starting with 56c22659c3119094093ae4bc144a11c38520b8cb0093d16756bb587b1bc16d59 not found: ID does not exist" containerID="56c22659c3119094093ae4bc144a11c38520b8cb0093d16756bb587b1bc16d59" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.474895 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56c22659c3119094093ae4bc144a11c38520b8cb0093d16756bb587b1bc16d59"} err="failed to get container status \"56c22659c3119094093ae4bc144a11c38520b8cb0093d16756bb587b1bc16d59\": rpc error: code = NotFound desc = could not find container \"56c22659c3119094093ae4bc144a11c38520b8cb0093d16756bb587b1bc16d59\": container with ID starting with 56c22659c3119094093ae4bc144a11c38520b8cb0093d16756bb587b1bc16d59 not found: ID does not exist" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.474911 4811 scope.go:117] "RemoveContainer" containerID="c725e1fc61d136f0ab43406a4006c67a2c77d8f20403dcc6309729aa00f35908" Feb 16 21:14:12 crc kubenswrapper[4811]: E0216 21:14:12.475698 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c725e1fc61d136f0ab43406a4006c67a2c77d8f20403dcc6309729aa00f35908\": container with ID starting with c725e1fc61d136f0ab43406a4006c67a2c77d8f20403dcc6309729aa00f35908 not found: ID does not exist" containerID="c725e1fc61d136f0ab43406a4006c67a2c77d8f20403dcc6309729aa00f35908" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.475723 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c725e1fc61d136f0ab43406a4006c67a2c77d8f20403dcc6309729aa00f35908"} err="failed to get container status 
\"c725e1fc61d136f0ab43406a4006c67a2c77d8f20403dcc6309729aa00f35908\": rpc error: code = NotFound desc = could not find container \"c725e1fc61d136f0ab43406a4006c67a2c77d8f20403dcc6309729aa00f35908\": container with ID starting with c725e1fc61d136f0ab43406a4006c67a2c77d8f20403dcc6309729aa00f35908 not found: ID does not exist" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.475738 4811 scope.go:117] "RemoveContainer" containerID="be79136dbe2bdbb49aabeaece316378e2070175b98147962cce166a4d9efadc2" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.476015 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be79136dbe2bdbb49aabeaece316378e2070175b98147962cce166a4d9efadc2"} err="failed to get container status \"be79136dbe2bdbb49aabeaece316378e2070175b98147962cce166a4d9efadc2\": rpc error: code = NotFound desc = could not find container \"be79136dbe2bdbb49aabeaece316378e2070175b98147962cce166a4d9efadc2\": container with ID starting with be79136dbe2bdbb49aabeaece316378e2070175b98147962cce166a4d9efadc2 not found: ID does not exist" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.476032 4811 scope.go:117] "RemoveContainer" containerID="dc47b4a1abc025f8c6300f7fa333be0cc3c69ef33433135bc8eadee043fbbe89" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.476251 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc47b4a1abc025f8c6300f7fa333be0cc3c69ef33433135bc8eadee043fbbe89"} err="failed to get container status \"dc47b4a1abc025f8c6300f7fa333be0cc3c69ef33433135bc8eadee043fbbe89\": rpc error: code = NotFound desc = could not find container \"dc47b4a1abc025f8c6300f7fa333be0cc3c69ef33433135bc8eadee043fbbe89\": container with ID starting with dc47b4a1abc025f8c6300f7fa333be0cc3c69ef33433135bc8eadee043fbbe89 not found: ID does not exist" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.476270 4811 scope.go:117] "RemoveContainer" 
containerID="56c22659c3119094093ae4bc144a11c38520b8cb0093d16756bb587b1bc16d59" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.476450 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56c22659c3119094093ae4bc144a11c38520b8cb0093d16756bb587b1bc16d59"} err="failed to get container status \"56c22659c3119094093ae4bc144a11c38520b8cb0093d16756bb587b1bc16d59\": rpc error: code = NotFound desc = could not find container \"56c22659c3119094093ae4bc144a11c38520b8cb0093d16756bb587b1bc16d59\": container with ID starting with 56c22659c3119094093ae4bc144a11c38520b8cb0093d16756bb587b1bc16d59 not found: ID does not exist" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.476469 4811 scope.go:117] "RemoveContainer" containerID="c725e1fc61d136f0ab43406a4006c67a2c77d8f20403dcc6309729aa00f35908" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.476930 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c725e1fc61d136f0ab43406a4006c67a2c77d8f20403dcc6309729aa00f35908"} err="failed to get container status \"c725e1fc61d136f0ab43406a4006c67a2c77d8f20403dcc6309729aa00f35908\": rpc error: code = NotFound desc = could not find container \"c725e1fc61d136f0ab43406a4006c67a2c77d8f20403dcc6309729aa00f35908\": container with ID starting with c725e1fc61d136f0ab43406a4006c67a2c77d8f20403dcc6309729aa00f35908 not found: ID does not exist" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.508219 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-bll6j" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.557347 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t4vc\" (UniqueName: \"kubernetes.io/projected/db5e3d04-029e-4180-9e25-479ce12a175d-kube-api-access-6t4vc\") pod \"db5e3d04-029e-4180-9e25-479ce12a175d\" (UID: \"db5e3d04-029e-4180-9e25-479ce12a175d\") " Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.557480 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-ovsdbserver-sb\") pod \"db5e3d04-029e-4180-9e25-479ce12a175d\" (UID: \"db5e3d04-029e-4180-9e25-479ce12a175d\") " Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.557552 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-ovsdbserver-nb\") pod \"db5e3d04-029e-4180-9e25-479ce12a175d\" (UID: \"db5e3d04-029e-4180-9e25-479ce12a175d\") " Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.557590 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-dns-svc\") pod \"db5e3d04-029e-4180-9e25-479ce12a175d\" (UID: \"db5e3d04-029e-4180-9e25-479ce12a175d\") " Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.557613 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-config\") pod \"db5e3d04-029e-4180-9e25-479ce12a175d\" (UID: \"db5e3d04-029e-4180-9e25-479ce12a175d\") " Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.557700 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-dns-swift-storage-0\") pod \"db5e3d04-029e-4180-9e25-479ce12a175d\" (UID: \"db5e3d04-029e-4180-9e25-479ce12a175d\") " Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.558070 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c8f1851-630d-4db5-8f53-1edcc96e1706-scripts\") pod \"ceilometer-0\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " pod="openstack/ceilometer-0" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.558106 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gp9k\" (UniqueName: \"kubernetes.io/projected/7c8f1851-630d-4db5-8f53-1edcc96e1706-kube-api-access-9gp9k\") pod \"ceilometer-0\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " pod="openstack/ceilometer-0" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.558138 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c8f1851-630d-4db5-8f53-1edcc96e1706-log-httpd\") pod \"ceilometer-0\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " pod="openstack/ceilometer-0" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.558179 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c8f1851-630d-4db5-8f53-1edcc96e1706-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " pod="openstack/ceilometer-0" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.558289 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c8f1851-630d-4db5-8f53-1edcc96e1706-run-httpd\") pod \"ceilometer-0\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " pod="openstack/ceilometer-0" Feb 16 21:14:12 crc kubenswrapper[4811]: 
I0216 21:14:12.558383 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c8f1851-630d-4db5-8f53-1edcc96e1706-config-data\") pod \"ceilometer-0\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " pod="openstack/ceilometer-0" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.558422 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c8f1851-630d-4db5-8f53-1edcc96e1706-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " pod="openstack/ceilometer-0" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.565311 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c8f1851-630d-4db5-8f53-1edcc96e1706-scripts\") pod \"ceilometer-0\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " pod="openstack/ceilometer-0" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.565945 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c8f1851-630d-4db5-8f53-1edcc96e1706-run-httpd\") pod \"ceilometer-0\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " pod="openstack/ceilometer-0" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.566211 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c8f1851-630d-4db5-8f53-1edcc96e1706-log-httpd\") pod \"ceilometer-0\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " pod="openstack/ceilometer-0" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.567967 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c8f1851-630d-4db5-8f53-1edcc96e1706-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " pod="openstack/ceilometer-0" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.572265 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c8f1851-630d-4db5-8f53-1edcc96e1706-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " pod="openstack/ceilometer-0" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.602637 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c8f1851-630d-4db5-8f53-1edcc96e1706-config-data\") pod \"ceilometer-0\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " pod="openstack/ceilometer-0" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.602974 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "db5e3d04-029e-4180-9e25-479ce12a175d" (UID: "db5e3d04-029e-4180-9e25-479ce12a175d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.608496 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "db5e3d04-029e-4180-9e25-479ce12a175d" (UID: "db5e3d04-029e-4180-9e25-479ce12a175d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.605947 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gp9k\" (UniqueName: \"kubernetes.io/projected/7c8f1851-630d-4db5-8f53-1edcc96e1706-kube-api-access-9gp9k\") pod \"ceilometer-0\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " pod="openstack/ceilometer-0" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.620532 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db5e3d04-029e-4180-9e25-479ce12a175d-kube-api-access-6t4vc" (OuterVolumeSpecName: "kube-api-access-6t4vc") pod "db5e3d04-029e-4180-9e25-479ce12a175d" (UID: "db5e3d04-029e-4180-9e25-479ce12a175d"). InnerVolumeSpecName "kube-api-access-6t4vc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.633371 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "db5e3d04-029e-4180-9e25-479ce12a175d" (UID: "db5e3d04-029e-4180-9e25-479ce12a175d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.651453 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.660581 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.660614 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.660628 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t4vc\" (UniqueName: \"kubernetes.io/projected/db5e3d04-029e-4180-9e25-479ce12a175d-kube-api-access-6t4vc\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.660637 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.664701 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "db5e3d04-029e-4180-9e25-479ce12a175d" (UID: "db5e3d04-029e-4180-9e25-479ce12a175d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.667505 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-config" (OuterVolumeSpecName: "config") pod "db5e3d04-029e-4180-9e25-479ce12a175d" (UID: "db5e3d04-029e-4180-9e25-479ce12a175d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.743976 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18bbdf69-d815-49b8-a29d-8b90a8e2987f" path="/var/lib/kubelet/pods/18bbdf69-d815-49b8-a29d-8b90a8e2987f/volumes" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.763014 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:12 crc kubenswrapper[4811]: I0216 21:14:12.763046 4811 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db5e3d04-029e-4180-9e25-479ce12a175d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:13 crc kubenswrapper[4811]: I0216 21:14:13.236802 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"24fdb0a1-a616-4ed9-b106-f7de8952a77a","Type":"ContainerStarted","Data":"d4dc47d2f8526a1c89f1233136a71e685eef40a9173831ab493ef07b76c1ec0c"} Feb 16 21:14:13 crc kubenswrapper[4811]: I0216 21:14:13.238913 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-bll6j" event={"ID":"db5e3d04-029e-4180-9e25-479ce12a175d","Type":"ContainerDied","Data":"ca875f9deaeee61453b0b451457a6678f3d3807f7a1bfbf3e5b0dcfa9537fdd5"} Feb 16 21:14:13 crc kubenswrapper[4811]: I0216 21:14:13.238944 4811 scope.go:117] "RemoveContainer" containerID="a280ccd6dff994b06b053685eb15a03ad1b35dcf3aab6ce5e682b0e3053ce9fe" Feb 16 21:14:13 crc kubenswrapper[4811]: I0216 21:14:13.239013 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-bll6j" Feb 16 21:14:13 crc kubenswrapper[4811]: I0216 21:14:13.246887 4811 generic.go:334] "Generic (PLEG): container finished" podID="b1f571c4-4cc9-417d-8fe3-84cf4dba83a8" containerID="c5765e2b52670de282c9d5c6431131396d6f90c946033e463ec9d39a7fbb25eb" exitCode=0 Feb 16 21:14:13 crc kubenswrapper[4811]: I0216 21:14:13.246998 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" event={"ID":"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8","Type":"ContainerDied","Data":"c5765e2b52670de282c9d5c6431131396d6f90c946033e463ec9d39a7fbb25eb"} Feb 16 21:14:13 crc kubenswrapper[4811]: I0216 21:14:13.318395 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-bll6j"] Feb 16 21:14:13 crc kubenswrapper[4811]: I0216 21:14:13.329938 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-bll6j"] Feb 16 21:14:14 crc kubenswrapper[4811]: I0216 21:14:14.114235 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 21:14:14 crc kubenswrapper[4811]: I0216 21:14:14.721540 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db5e3d04-029e-4180-9e25-479ce12a175d" path="/var/lib/kubelet/pods/db5e3d04-029e-4180-9e25-479ce12a175d/volumes" Feb 16 21:14:15 crc kubenswrapper[4811]: I0216 21:14:15.345526 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:14:15 crc kubenswrapper[4811]: W0216 21:14:15.384837 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c8f1851_630d_4db5_8f53_1edcc96e1706.slice/crio-6c86e1d4a6a2127a3d267402899f75ad84bda7f9cdd4ce9eb38b1184d8f3ccfc WatchSource:0}: Error finding container 6c86e1d4a6a2127a3d267402899f75ad84bda7f9cdd4ce9eb38b1184d8f3ccfc: Status 404 returned error can't find the container with id 
6c86e1d4a6a2127a3d267402899f75ad84bda7f9cdd4ce9eb38b1184d8f3ccfc
Feb 16 21:14:16 crc kubenswrapper[4811]: I0216 21:14:16.299305 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8547d7757d-rdzdb" event={"ID":"0dfcebd2-4ec9-463d-9ce6-801911550f42","Type":"ContainerStarted","Data":"a89e7d4a00238114233ae27cce1702c0c0a891dd3002ac13e5dd8f80ae876f73"}
Feb 16 21:14:16 crc kubenswrapper[4811]: I0216 21:14:16.299790 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8547d7757d-rdzdb" event={"ID":"0dfcebd2-4ec9-463d-9ce6-801911550f42","Type":"ContainerStarted","Data":"9ee44481de5b476853c290245d1d148f55655c7e450e642813b79f50a01ac781"}
Feb 16 21:14:16 crc kubenswrapper[4811]: I0216 21:14:16.303780 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" event={"ID":"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8","Type":"ContainerStarted","Data":"fc8bb9f355be0845136116f1c4060f71f870aeb595c37d9d537aa11a5d87a3f6"}
Feb 16 21:14:16 crc kubenswrapper[4811]: I0216 21:14:16.304490 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-wk5kr"
Feb 16 21:14:16 crc kubenswrapper[4811]: I0216 21:14:16.307144 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"24fdb0a1-a616-4ed9-b106-f7de8952a77a","Type":"ContainerStarted","Data":"5b4c62f0e942f5acd128faeeaaffb5797a24b5610915a8879b36f2a24c0f3006"}
Feb 16 21:14:16 crc kubenswrapper[4811]: I0216 21:14:16.307245 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="24fdb0a1-a616-4ed9-b106-f7de8952a77a" containerName="cinder-api-log" containerID="cri-o://d4dc47d2f8526a1c89f1233136a71e685eef40a9173831ab493ef07b76c1ec0c" gracePeriod=30
Feb 16 21:14:16 crc kubenswrapper[4811]: I0216 21:14:16.307306 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="24fdb0a1-a616-4ed9-b106-f7de8952a77a" containerName="cinder-api" containerID="cri-o://5b4c62f0e942f5acd128faeeaaffb5797a24b5610915a8879b36f2a24c0f3006" gracePeriod=30
Feb 16 21:14:16 crc kubenswrapper[4811]: I0216 21:14:16.307264 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 16 21:14:16 crc kubenswrapper[4811]: I0216 21:14:16.311111 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c8f1851-630d-4db5-8f53-1edcc96e1706","Type":"ContainerStarted","Data":"739b7a86ad89f7167b6225ae1a8e6771de0f13bcdae6f03d582802000a85e879"}
Feb 16 21:14:16 crc kubenswrapper[4811]: I0216 21:14:16.311152 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c8f1851-630d-4db5-8f53-1edcc96e1706","Type":"ContainerStarted","Data":"6c86e1d4a6a2127a3d267402899f75ad84bda7f9cdd4ce9eb38b1184d8f3ccfc"}
Feb 16 21:14:16 crc kubenswrapper[4811]: I0216 21:14:16.316681 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-58f445f5bc-kgwdh" event={"ID":"44198ae0-a1f3-4eee-bcba-4898da249e24","Type":"ContainerStarted","Data":"7c5d8fc28aed862cc29a4f925d541b05edbbda2301d64895e04d3b810eba0374"}
Feb 16 21:14:16 crc kubenswrapper[4811]: I0216 21:14:16.316727 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-58f445f5bc-kgwdh" event={"ID":"44198ae0-a1f3-4eee-bcba-4898da249e24","Type":"ContainerStarted","Data":"b759e5fb88b7d9e8b9bce880da364dc681a0262998a3e3ebef09e2b2c7f06163"}
Feb 16 21:14:16 crc kubenswrapper[4811]: I0216 21:14:16.322235 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-8547d7757d-rdzdb" podStartSLOduration=3.058931753 podStartE2EDuration="6.32221645s" podCreationTimestamp="2026-02-16 21:14:10 +0000 UTC" firstStartedPulling="2026-02-16 21:14:11.350792845 +0000 UTC m=+1069.280088783" lastFinishedPulling="2026-02-16 21:14:14.614077542 +0000 UTC m=+1072.543373480" observedRunningTime="2026-02-16 21:14:16.313770564 +0000 UTC m=+1074.243066512" watchObservedRunningTime="2026-02-16 21:14:16.32221645 +0000 UTC m=+1074.251512418"
Feb 16 21:14:16 crc kubenswrapper[4811]: I0216 21:14:16.322710 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8020087b-01d7-425a-84fc-dd7e9278f4d2","Type":"ContainerStarted","Data":"ac16ace31c36a0e8e6df8da8adcb5a6f06913d24bbf6fd96aa71653fa22b7c36"}
Feb 16 21:14:16 crc kubenswrapper[4811]: I0216 21:14:16.355807 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" podStartSLOduration=6.355781457 podStartE2EDuration="6.355781457s" podCreationTimestamp="2026-02-16 21:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:14:16.343794931 +0000 UTC m=+1074.273090889" watchObservedRunningTime="2026-02-16 21:14:16.355781457 +0000 UTC m=+1074.285077415"
Feb 16 21:14:16 crc kubenswrapper[4811]: I0216 21:14:16.382101 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.382073039 podStartE2EDuration="6.382073039s" podCreationTimestamp="2026-02-16 21:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:14:16.366451549 +0000 UTC m=+1074.295747487" watchObservedRunningTime="2026-02-16 21:14:16.382073039 +0000 UTC m=+1074.311368977"
Feb 16 21:14:16 crc kubenswrapper[4811]: I0216 21:14:16.390082 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-58f445f5bc-kgwdh" podStartSLOduration=3.191584672 podStartE2EDuration="6.390065353s" podCreationTimestamp="2026-02-16 21:14:10 +0000 UTC" firstStartedPulling="2026-02-16 21:14:11.522691567 +0000 UTC m=+1069.451987495" lastFinishedPulling="2026-02-16 21:14:14.721172238 +0000 UTC m=+1072.650468176" observedRunningTime="2026-02-16 21:14:16.387474987 +0000 UTC m=+1074.316770935" watchObservedRunningTime="2026-02-16 21:14:16.390065353 +0000 UTC m=+1074.319361291"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.111298 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.176157 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fdb0a1-a616-4ed9-b106-f7de8952a77a-combined-ca-bundle\") pod \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") "
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.176268 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24fdb0a1-a616-4ed9-b106-f7de8952a77a-etc-machine-id\") pod \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") "
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.176390 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24fdb0a1-a616-4ed9-b106-f7de8952a77a-logs\") pod \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") "
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.176452 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2frw9\" (UniqueName: \"kubernetes.io/projected/24fdb0a1-a616-4ed9-b106-f7de8952a77a-kube-api-access-2frw9\") pod \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") "
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.176475 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24fdb0a1-a616-4ed9-b106-f7de8952a77a-config-data-custom\") pod \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") "
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.176499 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24fdb0a1-a616-4ed9-b106-f7de8952a77a-config-data\") pod \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") "
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.176524 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24fdb0a1-a616-4ed9-b106-f7de8952a77a-scripts\") pod \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\" (UID: \"24fdb0a1-a616-4ed9-b106-f7de8952a77a\") "
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.180357 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24fdb0a1-a616-4ed9-b106-f7de8952a77a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "24fdb0a1-a616-4ed9-b106-f7de8952a77a" (UID: "24fdb0a1-a616-4ed9-b106-f7de8952a77a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.180626 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24fdb0a1-a616-4ed9-b106-f7de8952a77a-logs" (OuterVolumeSpecName: "logs") pod "24fdb0a1-a616-4ed9-b106-f7de8952a77a" (UID: "24fdb0a1-a616-4ed9-b106-f7de8952a77a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.196422 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24fdb0a1-a616-4ed9-b106-f7de8952a77a-kube-api-access-2frw9" (OuterVolumeSpecName: "kube-api-access-2frw9") pod "24fdb0a1-a616-4ed9-b106-f7de8952a77a" (UID: "24fdb0a1-a616-4ed9-b106-f7de8952a77a"). InnerVolumeSpecName "kube-api-access-2frw9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.207344 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24fdb0a1-a616-4ed9-b106-f7de8952a77a-scripts" (OuterVolumeSpecName: "scripts") pod "24fdb0a1-a616-4ed9-b106-f7de8952a77a" (UID: "24fdb0a1-a616-4ed9-b106-f7de8952a77a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.215851 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24fdb0a1-a616-4ed9-b106-f7de8952a77a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "24fdb0a1-a616-4ed9-b106-f7de8952a77a" (UID: "24fdb0a1-a616-4ed9-b106-f7de8952a77a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.229422 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24fdb0a1-a616-4ed9-b106-f7de8952a77a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24fdb0a1-a616-4ed9-b106-f7de8952a77a" (UID: "24fdb0a1-a616-4ed9-b106-f7de8952a77a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.275636 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-77895df746-7lfzq"]
Feb 16 21:14:17 crc kubenswrapper[4811]: E0216 21:14:17.276101 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24fdb0a1-a616-4ed9-b106-f7de8952a77a" containerName="cinder-api"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.276117 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="24fdb0a1-a616-4ed9-b106-f7de8952a77a" containerName="cinder-api"
Feb 16 21:14:17 crc kubenswrapper[4811]: E0216 21:14:17.276133 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db5e3d04-029e-4180-9e25-479ce12a175d" containerName="init"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.276139 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="db5e3d04-029e-4180-9e25-479ce12a175d" containerName="init"
Feb 16 21:14:17 crc kubenswrapper[4811]: E0216 21:14:17.276162 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24fdb0a1-a616-4ed9-b106-f7de8952a77a" containerName="cinder-api-log"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.276169 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="24fdb0a1-a616-4ed9-b106-f7de8952a77a" containerName="cinder-api-log"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.276367 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="db5e3d04-029e-4180-9e25-479ce12a175d" containerName="init"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.276391 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="24fdb0a1-a616-4ed9-b106-f7de8952a77a" containerName="cinder-api-log"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.276409 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="24fdb0a1-a616-4ed9-b106-f7de8952a77a" containerName="cinder-api"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.278668 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fdb0a1-a616-4ed9-b106-f7de8952a77a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.278692 4811 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24fdb0a1-a616-4ed9-b106-f7de8952a77a-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.278702 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24fdb0a1-a616-4ed9-b106-f7de8952a77a-logs\") on node \"crc\" DevicePath \"\""
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.278712 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2frw9\" (UniqueName: \"kubernetes.io/projected/24fdb0a1-a616-4ed9-b106-f7de8952a77a-kube-api-access-2frw9\") on node \"crc\" DevicePath \"\""
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.278722 4811 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24fdb0a1-a616-4ed9-b106-f7de8952a77a-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.278729 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24fdb0a1-a616-4ed9-b106-f7de8952a77a-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.279067 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-77895df746-7lfzq"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.284166 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24fdb0a1-a616-4ed9-b106-f7de8952a77a-config-data" (OuterVolumeSpecName: "config-data") pod "24fdb0a1-a616-4ed9-b106-f7de8952a77a" (UID: "24fdb0a1-a616-4ed9-b106-f7de8952a77a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.284507 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.286993 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.288853 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-77895df746-7lfzq"]
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.337875 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8020087b-01d7-425a-84fc-dd7e9278f4d2","Type":"ContainerStarted","Data":"c0eb9303beb9b9ff9264df6821cb7005414fa61231cb1b1117099da18b77adb0"}
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.342989 4811 generic.go:334] "Generic (PLEG): container finished" podID="24fdb0a1-a616-4ed9-b106-f7de8952a77a" containerID="5b4c62f0e942f5acd128faeeaaffb5797a24b5610915a8879b36f2a24c0f3006" exitCode=0
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.343018 4811 generic.go:334] "Generic (PLEG): container finished" podID="24fdb0a1-a616-4ed9-b106-f7de8952a77a" containerID="d4dc47d2f8526a1c89f1233136a71e685eef40a9173831ab493ef07b76c1ec0c" exitCode=143
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.343059 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"24fdb0a1-a616-4ed9-b106-f7de8952a77a","Type":"ContainerDied","Data":"5b4c62f0e942f5acd128faeeaaffb5797a24b5610915a8879b36f2a24c0f3006"}
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.343082 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"24fdb0a1-a616-4ed9-b106-f7de8952a77a","Type":"ContainerDied","Data":"d4dc47d2f8526a1c89f1233136a71e685eef40a9173831ab493ef07b76c1ec0c"}
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.343093 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"24fdb0a1-a616-4ed9-b106-f7de8952a77a","Type":"ContainerDied","Data":"673e30768686328c5fb2074b544362dc4ffd6f4d85b7e9aebfe8cbd70cad0fc1"}
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.343107 4811 scope.go:117] "RemoveContainer" containerID="5b4c62f0e942f5acd128faeeaaffb5797a24b5610915a8879b36f2a24c0f3006"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.343246 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.354944 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c8f1851-630d-4db5-8f53-1edcc96e1706","Type":"ContainerStarted","Data":"4123c2b47bef8639280abfe06dcefe985e66b445d0e9a8a7700a19b605ab5333"}
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.378492 4811 scope.go:117] "RemoveContainer" containerID="d4dc47d2f8526a1c89f1233136a71e685eef40a9173831ab493ef07b76c1ec0c"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.380336 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/715afebd-20b0-4059-953f-aee92f9562f9-public-tls-certs\") pod \"barbican-api-77895df746-7lfzq\" (UID: \"715afebd-20b0-4059-953f-aee92f9562f9\") " pod="openstack/barbican-api-77895df746-7lfzq"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.380377 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g6jp\" (UniqueName: \"kubernetes.io/projected/715afebd-20b0-4059-953f-aee92f9562f9-kube-api-access-9g6jp\") pod \"barbican-api-77895df746-7lfzq\" (UID: \"715afebd-20b0-4059-953f-aee92f9562f9\") " pod="openstack/barbican-api-77895df746-7lfzq"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.380446 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/715afebd-20b0-4059-953f-aee92f9562f9-logs\") pod \"barbican-api-77895df746-7lfzq\" (UID: \"715afebd-20b0-4059-953f-aee92f9562f9\") " pod="openstack/barbican-api-77895df746-7lfzq"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.380478 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/715afebd-20b0-4059-953f-aee92f9562f9-config-data-custom\") pod \"barbican-api-77895df746-7lfzq\" (UID: \"715afebd-20b0-4059-953f-aee92f9562f9\") " pod="openstack/barbican-api-77895df746-7lfzq"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.380539 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/715afebd-20b0-4059-953f-aee92f9562f9-internal-tls-certs\") pod \"barbican-api-77895df746-7lfzq\" (UID: \"715afebd-20b0-4059-953f-aee92f9562f9\") " pod="openstack/barbican-api-77895df746-7lfzq"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.380627 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/715afebd-20b0-4059-953f-aee92f9562f9-config-data\") pod \"barbican-api-77895df746-7lfzq\" (UID: \"715afebd-20b0-4059-953f-aee92f9562f9\") " pod="openstack/barbican-api-77895df746-7lfzq"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.380664 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/715afebd-20b0-4059-953f-aee92f9562f9-combined-ca-bundle\") pod \"barbican-api-77895df746-7lfzq\" (UID: \"715afebd-20b0-4059-953f-aee92f9562f9\") " pod="openstack/barbican-api-77895df746-7lfzq"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.380736 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24fdb0a1-a616-4ed9-b106-f7de8952a77a-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.382594 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.52982139 podStartE2EDuration="7.382571948s" podCreationTimestamp="2026-02-16 21:14:10 +0000 UTC" firstStartedPulling="2026-02-16 21:14:11.873605352 +0000 UTC m=+1069.802901280" lastFinishedPulling="2026-02-16 21:14:14.7263559 +0000 UTC m=+1072.655651838" observedRunningTime="2026-02-16 21:14:17.354598674 +0000 UTC m=+1075.283894642" watchObservedRunningTime="2026-02-16 21:14:17.382571948 +0000 UTC m=+1075.311867886"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.396737 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.427285 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.436893 4811 scope.go:117] "RemoveContainer" containerID="5b4c62f0e942f5acd128faeeaaffb5797a24b5610915a8879b36f2a24c0f3006"
Feb 16 21:14:17 crc kubenswrapper[4811]: E0216 21:14:17.441470 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b4c62f0e942f5acd128faeeaaffb5797a24b5610915a8879b36f2a24c0f3006\": container with ID starting with 5b4c62f0e942f5acd128faeeaaffb5797a24b5610915a8879b36f2a24c0f3006 not found: ID does not exist" containerID="5b4c62f0e942f5acd128faeeaaffb5797a24b5610915a8879b36f2a24c0f3006"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.441515 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b4c62f0e942f5acd128faeeaaffb5797a24b5610915a8879b36f2a24c0f3006"} err="failed to get container status \"5b4c62f0e942f5acd128faeeaaffb5797a24b5610915a8879b36f2a24c0f3006\": rpc error: code = NotFound desc = could not find container \"5b4c62f0e942f5acd128faeeaaffb5797a24b5610915a8879b36f2a24c0f3006\": container with ID starting with 5b4c62f0e942f5acd128faeeaaffb5797a24b5610915a8879b36f2a24c0f3006 not found: ID does not exist"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.441542 4811 scope.go:117] "RemoveContainer" containerID="d4dc47d2f8526a1c89f1233136a71e685eef40a9173831ab493ef07b76c1ec0c"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.443563 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.445499 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 16 21:14:17 crc kubenswrapper[4811]: E0216 21:14:17.446876 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4dc47d2f8526a1c89f1233136a71e685eef40a9173831ab493ef07b76c1ec0c\": container with ID starting with d4dc47d2f8526a1c89f1233136a71e685eef40a9173831ab493ef07b76c1ec0c not found: ID does not exist" containerID="d4dc47d2f8526a1c89f1233136a71e685eef40a9173831ab493ef07b76c1ec0c"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.446902 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4dc47d2f8526a1c89f1233136a71e685eef40a9173831ab493ef07b76c1ec0c"} err="failed to get container status \"d4dc47d2f8526a1c89f1233136a71e685eef40a9173831ab493ef07b76c1ec0c\": rpc error: code = NotFound desc = could not find container \"d4dc47d2f8526a1c89f1233136a71e685eef40a9173831ab493ef07b76c1ec0c\": container with ID starting with d4dc47d2f8526a1c89f1233136a71e685eef40a9173831ab493ef07b76c1ec0c not found: ID does not exist"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.446926 4811 scope.go:117] "RemoveContainer" containerID="5b4c62f0e942f5acd128faeeaaffb5797a24b5610915a8879b36f2a24c0f3006"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.449451 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b4c62f0e942f5acd128faeeaaffb5797a24b5610915a8879b36f2a24c0f3006"} err="failed to get container status \"5b4c62f0e942f5acd128faeeaaffb5797a24b5610915a8879b36f2a24c0f3006\": rpc error: code = NotFound desc = could not find container \"5b4c62f0e942f5acd128faeeaaffb5797a24b5610915a8879b36f2a24c0f3006\": container with ID starting with 5b4c62f0e942f5acd128faeeaaffb5797a24b5610915a8879b36f2a24c0f3006 not found: ID does not exist"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.449494 4811 scope.go:117] "RemoveContainer" containerID="d4dc47d2f8526a1c89f1233136a71e685eef40a9173831ab493ef07b76c1ec0c"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.451022 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4dc47d2f8526a1c89f1233136a71e685eef40a9173831ab493ef07b76c1ec0c"} err="failed to get container status \"d4dc47d2f8526a1c89f1233136a71e685eef40a9173831ab493ef07b76c1ec0c\": rpc error: code = NotFound desc = could not find container \"d4dc47d2f8526a1c89f1233136a71e685eef40a9173831ab493ef07b76c1ec0c\": container with ID starting with d4dc47d2f8526a1c89f1233136a71e685eef40a9173831ab493ef07b76c1ec0c not found: ID does not exist"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.451541 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.451656 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.452028 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.452047 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.483538 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/715afebd-20b0-4059-953f-aee92f9562f9-logs\") pod \"barbican-api-77895df746-7lfzq\" (UID: \"715afebd-20b0-4059-953f-aee92f9562f9\") " pod="openstack/barbican-api-77895df746-7lfzq"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.483623 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/715afebd-20b0-4059-953f-aee92f9562f9-config-data-custom\") pod \"barbican-api-77895df746-7lfzq\" (UID: \"715afebd-20b0-4059-953f-aee92f9562f9\") " pod="openstack/barbican-api-77895df746-7lfzq"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.483690 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26515dac-f971-477b-b088-1f656ddc3f62-public-tls-certs\") pod \"cinder-api-0\" (UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.483869 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/26515dac-f971-477b-b088-1f656ddc3f62-etc-machine-id\") pod \"cinder-api-0\" (UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.483939 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/715afebd-20b0-4059-953f-aee92f9562f9-internal-tls-certs\") pod \"barbican-api-77895df746-7lfzq\" (UID: \"715afebd-20b0-4059-953f-aee92f9562f9\") " pod="openstack/barbican-api-77895df746-7lfzq"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.484015 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26515dac-f971-477b-b088-1f656ddc3f62-config-data-custom\") pod \"cinder-api-0\" (UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.484057 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26515dac-f971-477b-b088-1f656ddc3f62-scripts\") pod \"cinder-api-0\" (UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.484102 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/715afebd-20b0-4059-953f-aee92f9562f9-config-data\") pod \"barbican-api-77895df746-7lfzq\" (UID: \"715afebd-20b0-4059-953f-aee92f9562f9\") " pod="openstack/barbican-api-77895df746-7lfzq"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.484149 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26515dac-f971-477b-b088-1f656ddc3f62-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.484212 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26515dac-f971-477b-b088-1f656ddc3f62-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.484233 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/715afebd-20b0-4059-953f-aee92f9562f9-combined-ca-bundle\") pod \"barbican-api-77895df746-7lfzq\" (UID: \"715afebd-20b0-4059-953f-aee92f9562f9\") " pod="openstack/barbican-api-77895df746-7lfzq"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.484287 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6xxd\" (UniqueName: \"kubernetes.io/projected/26515dac-f971-477b-b088-1f656ddc3f62-kube-api-access-k6xxd\") pod \"cinder-api-0\" (UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.484305 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26515dac-f971-477b-b088-1f656ddc3f62-config-data\") pod \"cinder-api-0\" (UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.484378 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26515dac-f971-477b-b088-1f656ddc3f62-logs\") pod \"cinder-api-0\" (UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.484415 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/715afebd-20b0-4059-953f-aee92f9562f9-public-tls-certs\") pod \"barbican-api-77895df746-7lfzq\" (UID: \"715afebd-20b0-4059-953f-aee92f9562f9\") " pod="openstack/barbican-api-77895df746-7lfzq"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.484452 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g6jp\" (UniqueName: \"kubernetes.io/projected/715afebd-20b0-4059-953f-aee92f9562f9-kube-api-access-9g6jp\") pod \"barbican-api-77895df746-7lfzq\" (UID: \"715afebd-20b0-4059-953f-aee92f9562f9\") " pod="openstack/barbican-api-77895df746-7lfzq"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.485393 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/715afebd-20b0-4059-953f-aee92f9562f9-logs\") pod \"barbican-api-77895df746-7lfzq\" (UID: \"715afebd-20b0-4059-953f-aee92f9562f9\") " pod="openstack/barbican-api-77895df746-7lfzq"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.491460 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/715afebd-20b0-4059-953f-aee92f9562f9-config-data\") pod \"barbican-api-77895df746-7lfzq\" (UID: \"715afebd-20b0-4059-953f-aee92f9562f9\") " pod="openstack/barbican-api-77895df746-7lfzq"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.494951 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/715afebd-20b0-4059-953f-aee92f9562f9-config-data-custom\") pod \"barbican-api-77895df746-7lfzq\" (UID: \"715afebd-20b0-4059-953f-aee92f9562f9\") " pod="openstack/barbican-api-77895df746-7lfzq"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.501679 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/715afebd-20b0-4059-953f-aee92f9562f9-public-tls-certs\") pod \"barbican-api-77895df746-7lfzq\" (UID: \"715afebd-20b0-4059-953f-aee92f9562f9\") " pod="openstack/barbican-api-77895df746-7lfzq"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.501773 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/715afebd-20b0-4059-953f-aee92f9562f9-internal-tls-certs\") pod \"barbican-api-77895df746-7lfzq\" (UID: \"715afebd-20b0-4059-953f-aee92f9562f9\") " pod="openstack/barbican-api-77895df746-7lfzq"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.502244 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/715afebd-20b0-4059-953f-aee92f9562f9-combined-ca-bundle\") pod \"barbican-api-77895df746-7lfzq\" (UID: \"715afebd-20b0-4059-953f-aee92f9562f9\") " pod="openstack/barbican-api-77895df746-7lfzq"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.519177 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g6jp\" (UniqueName: \"kubernetes.io/projected/715afebd-20b0-4059-953f-aee92f9562f9-kube-api-access-9g6jp\") pod \"barbican-api-77895df746-7lfzq\" (UID: \"715afebd-20b0-4059-953f-aee92f9562f9\") " pod="openstack/barbican-api-77895df746-7lfzq"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.588397 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26515dac-f971-477b-b088-1f656ddc3f62-logs\") pod \"cinder-api-0\" (UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.588506 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26515dac-f971-477b-b088-1f656ddc3f62-public-tls-certs\") pod \"cinder-api-0\" (UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.588538 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/26515dac-f971-477b-b088-1f656ddc3f62-etc-machine-id\") pod \"cinder-api-0\" (UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.588580 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26515dac-f971-477b-b088-1f656ddc3f62-config-data-custom\") pod \"cinder-api-0\" (UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0"
Feb 16 21:14:17 crc kubenswrapper[4811]: I0216
21:14:17.588604 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26515dac-f971-477b-b088-1f656ddc3f62-scripts\") pod \"cinder-api-0\" (UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0" Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.588633 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26515dac-f971-477b-b088-1f656ddc3f62-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0" Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.588659 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26515dac-f971-477b-b088-1f656ddc3f62-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0" Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.588685 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6xxd\" (UniqueName: \"kubernetes.io/projected/26515dac-f971-477b-b088-1f656ddc3f62-kube-api-access-k6xxd\") pod \"cinder-api-0\" (UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0" Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.588703 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26515dac-f971-477b-b088-1f656ddc3f62-config-data\") pod \"cinder-api-0\" (UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0" Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.592591 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26515dac-f971-477b-b088-1f656ddc3f62-config-data\") pod \"cinder-api-0\" 
(UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0" Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.594663 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26515dac-f971-477b-b088-1f656ddc3f62-logs\") pod \"cinder-api-0\" (UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0" Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.594728 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/26515dac-f971-477b-b088-1f656ddc3f62-etc-machine-id\") pod \"cinder-api-0\" (UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0" Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.595726 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26515dac-f971-477b-b088-1f656ddc3f62-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0" Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.599303 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26515dac-f971-477b-b088-1f656ddc3f62-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0" Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.601979 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26515dac-f971-477b-b088-1f656ddc3f62-config-data-custom\") pod \"cinder-api-0\" (UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0" Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.603678 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/26515dac-f971-477b-b088-1f656ddc3f62-public-tls-certs\") pod \"cinder-api-0\" (UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0" Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.603717 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26515dac-f971-477b-b088-1f656ddc3f62-scripts\") pod \"cinder-api-0\" (UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0" Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.622057 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6xxd\" (UniqueName: \"kubernetes.io/projected/26515dac-f971-477b-b088-1f656ddc3f62-kube-api-access-k6xxd\") pod \"cinder-api-0\" (UID: \"26515dac-f971-477b-b088-1f656ddc3f62\") " pod="openstack/cinder-api-0" Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.622622 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-77895df746-7lfzq" Feb 16 21:14:17 crc kubenswrapper[4811]: I0216 21:14:17.781286 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 21:14:18 crc kubenswrapper[4811]: I0216 21:14:18.136283 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-77895df746-7lfzq"] Feb 16 21:14:18 crc kubenswrapper[4811]: I0216 21:14:18.256490 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 21:14:18 crc kubenswrapper[4811]: W0216 21:14:18.260801 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26515dac_f971_477b_b088_1f656ddc3f62.slice/crio-e3771f93bd2f92b9ff4d3611648699adbff249eac311b46446e5f201a802a2d7 WatchSource:0}: Error finding container e3771f93bd2f92b9ff4d3611648699adbff249eac311b46446e5f201a802a2d7: Status 404 returned error can't find the container with id e3771f93bd2f92b9ff4d3611648699adbff249eac311b46446e5f201a802a2d7 Feb 16 21:14:18 crc kubenswrapper[4811]: I0216 21:14:18.381858 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"26515dac-f971-477b-b088-1f656ddc3f62","Type":"ContainerStarted","Data":"e3771f93bd2f92b9ff4d3611648699adbff249eac311b46446e5f201a802a2d7"} Feb 16 21:14:18 crc kubenswrapper[4811]: I0216 21:14:18.392381 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c8f1851-630d-4db5-8f53-1edcc96e1706","Type":"ContainerStarted","Data":"2e5cf43f4fab9cf34ea911a08fc98814045502156f1eedfde384825e654f48ac"} Feb 16 21:14:18 crc kubenswrapper[4811]: I0216 21:14:18.397296 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77895df746-7lfzq" event={"ID":"715afebd-20b0-4059-953f-aee92f9562f9","Type":"ContainerStarted","Data":"862e9dd98f5178968039a3769602fbf97ae81d7b9fc292d78934a2b143d9c3d2"} Feb 16 21:14:18 crc kubenswrapper[4811]: I0216 21:14:18.746599 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24fdb0a1-a616-4ed9-b106-f7de8952a77a" 
path="/var/lib/kubelet/pods/24fdb0a1-a616-4ed9-b106-f7de8952a77a/volumes" Feb 16 21:14:19 crc kubenswrapper[4811]: I0216 21:14:19.411566 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77895df746-7lfzq" event={"ID":"715afebd-20b0-4059-953f-aee92f9562f9","Type":"ContainerStarted","Data":"a5a1ed6ae074772125b07f5bc75d2846dbaba91453c4706af1c568446534b9f5"} Feb 16 21:14:19 crc kubenswrapper[4811]: I0216 21:14:19.411835 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77895df746-7lfzq" event={"ID":"715afebd-20b0-4059-953f-aee92f9562f9","Type":"ContainerStarted","Data":"8e10d5403668bf4aa0319625927d0b82f30dd04af160ab800cdf72ae79205894"} Feb 16 21:14:19 crc kubenswrapper[4811]: I0216 21:14:19.412377 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-77895df746-7lfzq" Feb 16 21:14:19 crc kubenswrapper[4811]: I0216 21:14:19.412482 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-77895df746-7lfzq" Feb 16 21:14:19 crc kubenswrapper[4811]: I0216 21:14:19.413960 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"26515dac-f971-477b-b088-1f656ddc3f62","Type":"ContainerStarted","Data":"c73339ebc2957ec18222a6f47db0ba88737476e5e1d32e92d780df71ac8c5713"} Feb 16 21:14:19 crc kubenswrapper[4811]: I0216 21:14:19.416465 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c8f1851-630d-4db5-8f53-1edcc96e1706","Type":"ContainerStarted","Data":"14870886c0c4cdec5a84c500928d1f8fd68663a4d06bb779aa686b4c919452b8"} Feb 16 21:14:19 crc kubenswrapper[4811]: I0216 21:14:19.446260 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-77895df746-7lfzq" podStartSLOduration=2.446039563 podStartE2EDuration="2.446039563s" podCreationTimestamp="2026-02-16 21:14:17 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:14:19.442135363 +0000 UTC m=+1077.371431381" watchObservedRunningTime="2026-02-16 21:14:19.446039563 +0000 UTC m=+1077.375335501" Feb 16 21:14:20 crc kubenswrapper[4811]: I0216 21:14:20.424861 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"26515dac-f971-477b-b088-1f656ddc3f62","Type":"ContainerStarted","Data":"5b576717930988db7103e846af319dcace1bd8e841886004930014b63e7a748c"} Feb 16 21:14:20 crc kubenswrapper[4811]: I0216 21:14:20.426104 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 21:14:20 crc kubenswrapper[4811]: I0216 21:14:20.426122 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 21:14:20 crc kubenswrapper[4811]: I0216 21:14:20.452151 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.452131395 podStartE2EDuration="3.452131395s" podCreationTimestamp="2026-02-16 21:14:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:14:20.445126006 +0000 UTC m=+1078.374421944" watchObservedRunningTime="2026-02-16 21:14:20.452131395 +0000 UTC m=+1078.381427333" Feb 16 21:14:20 crc kubenswrapper[4811]: I0216 21:14:20.479821 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.16424009 podStartE2EDuration="8.479799062s" podCreationTimestamp="2026-02-16 21:14:12 +0000 UTC" firstStartedPulling="2026-02-16 21:14:15.388950567 +0000 UTC m=+1073.318246505" lastFinishedPulling="2026-02-16 21:14:18.704509519 +0000 UTC m=+1076.633805477" observedRunningTime="2026-02-16 21:14:20.472419854 +0000 UTC m=+1078.401715812" watchObservedRunningTime="2026-02-16 21:14:20.479799062 +0000 
UTC m=+1078.409095010" Feb 16 21:14:21 crc kubenswrapper[4811]: I0216 21:14:21.295104 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 21:14:21 crc kubenswrapper[4811]: I0216 21:14:21.320489 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" Feb 16 21:14:21 crc kubenswrapper[4811]: I0216 21:14:21.397039 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-4d766"] Feb 16 21:14:21 crc kubenswrapper[4811]: I0216 21:14:21.397277 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-4d766" podUID="405249e2-47a2-46d7-b5db-4bfb1ce2c477" containerName="dnsmasq-dns" containerID="cri-o://5e410f0115b61d53e645d5e282c1b7ac5836b0274ff964cb6e5db3ce79d08e65" gracePeriod=10 Feb 16 21:14:21 crc kubenswrapper[4811]: I0216 21:14:21.594655 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 16 21:14:21 crc kubenswrapper[4811]: I0216 21:14:21.656659 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 21:14:21 crc kubenswrapper[4811]: E0216 21:14:21.704898 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:14:21 crc kubenswrapper[4811]: I0216 21:14:21.937441 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-4d766" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.006522 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-config\") pod \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\" (UID: \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\") " Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.006576 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-ovsdbserver-nb\") pod \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\" (UID: \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\") " Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.006605 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-dns-swift-storage-0\") pod \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\" (UID: \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\") " Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.006666 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-ovsdbserver-sb\") pod \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\" (UID: \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\") " Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.006754 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pfvk\" (UniqueName: \"kubernetes.io/projected/405249e2-47a2-46d7-b5db-4bfb1ce2c477-kube-api-access-4pfvk\") pod \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\" (UID: \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\") " Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.006789 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-dns-svc\") pod \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\" (UID: \"405249e2-47a2-46d7-b5db-4bfb1ce2c477\") " Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.046635 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/405249e2-47a2-46d7-b5db-4bfb1ce2c477-kube-api-access-4pfvk" (OuterVolumeSpecName: "kube-api-access-4pfvk") pod "405249e2-47a2-46d7-b5db-4bfb1ce2c477" (UID: "405249e2-47a2-46d7-b5db-4bfb1ce2c477"). InnerVolumeSpecName "kube-api-access-4pfvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.109831 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pfvk\" (UniqueName: \"kubernetes.io/projected/405249e2-47a2-46d7-b5db-4bfb1ce2c477-kube-api-access-4pfvk\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.159220 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "405249e2-47a2-46d7-b5db-4bfb1ce2c477" (UID: "405249e2-47a2-46d7-b5db-4bfb1ce2c477"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.182745 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "405249e2-47a2-46d7-b5db-4bfb1ce2c477" (UID: "405249e2-47a2-46d7-b5db-4bfb1ce2c477"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.203262 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-config" (OuterVolumeSpecName: "config") pod "405249e2-47a2-46d7-b5db-4bfb1ce2c477" (UID: "405249e2-47a2-46d7-b5db-4bfb1ce2c477"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.204297 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "405249e2-47a2-46d7-b5db-4bfb1ce2c477" (UID: "405249e2-47a2-46d7-b5db-4bfb1ce2c477"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.204824 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "405249e2-47a2-46d7-b5db-4bfb1ce2c477" (UID: "405249e2-47a2-46d7-b5db-4bfb1ce2c477"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.212470 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.212531 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.212550 4811 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.212564 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.212597 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/405249e2-47a2-46d7-b5db-4bfb1ce2c477-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.213670 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-99ff95c78-p6wd9" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.435539 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5b9874789f-2tq4q"] Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.436147 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5b9874789f-2tq4q" podUID="ef08d7ef-0bd9-4126-bd7b-0d46b646be40" containerName="neutron-api" 
containerID="cri-o://f9d2524235b8f9166abd10a7283ee29249335c9a330d0ac0fe541fce4826fada" gracePeriod=30 Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.436563 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5b9874789f-2tq4q" podUID="ef08d7ef-0bd9-4126-bd7b-0d46b646be40" containerName="neutron-httpd" containerID="cri-o://f5f7c537b885c9273be01ba6a3b6ccdc3b0e04f82ed60207dba7e4dea389a114" gracePeriod=30 Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.475271 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-76ccfcd95-9jxjj"] Feb 16 21:14:22 crc kubenswrapper[4811]: E0216 21:14:22.475821 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="405249e2-47a2-46d7-b5db-4bfb1ce2c477" containerName="dnsmasq-dns" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.475848 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="405249e2-47a2-46d7-b5db-4bfb1ce2c477" containerName="dnsmasq-dns" Feb 16 21:14:22 crc kubenswrapper[4811]: E0216 21:14:22.475865 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="405249e2-47a2-46d7-b5db-4bfb1ce2c477" containerName="init" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.475876 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="405249e2-47a2-46d7-b5db-4bfb1ce2c477" containerName="init" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.476147 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="405249e2-47a2-46d7-b5db-4bfb1ce2c477" containerName="dnsmasq-dns" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.478831 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-76ccfcd95-9jxjj" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.493565 4811 generic.go:334] "Generic (PLEG): container finished" podID="405249e2-47a2-46d7-b5db-4bfb1ce2c477" containerID="5e410f0115b61d53e645d5e282c1b7ac5836b0274ff964cb6e5db3ce79d08e65" exitCode=0 Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.493836 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8020087b-01d7-425a-84fc-dd7e9278f4d2" containerName="cinder-scheduler" containerID="cri-o://ac16ace31c36a0e8e6df8da8adcb5a6f06913d24bbf6fd96aa71653fa22b7c36" gracePeriod=30 Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.494229 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-4d766" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.504688 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-4d766" event={"ID":"405249e2-47a2-46d7-b5db-4bfb1ce2c477","Type":"ContainerDied","Data":"5e410f0115b61d53e645d5e282c1b7ac5836b0274ff964cb6e5db3ce79d08e65"} Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.504737 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-4d766" event={"ID":"405249e2-47a2-46d7-b5db-4bfb1ce2c477","Type":"ContainerDied","Data":"1e2f41c6beedee19ba0a367dabad552a209fb6818dcbbcaef9f31ebf44ab1f94"} Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.504758 4811 scope.go:117] "RemoveContainer" containerID="5e410f0115b61d53e645d5e282c1b7ac5836b0274ff964cb6e5db3ce79d08e65" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.504957 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8020087b-01d7-425a-84fc-dd7e9278f4d2" containerName="probe" containerID="cri-o://c0eb9303beb9b9ff9264df6821cb7005414fa61231cb1b1117099da18b77adb0" 
gracePeriod=30 Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.511780 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.517474 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae-combined-ca-bundle\") pod \"neutron-76ccfcd95-9jxjj\" (UID: \"fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae\") " pod="openstack/neutron-76ccfcd95-9jxjj" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.517516 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae-internal-tls-certs\") pod \"neutron-76ccfcd95-9jxjj\" (UID: \"fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae\") " pod="openstack/neutron-76ccfcd95-9jxjj" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.517581 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f66xg\" (UniqueName: \"kubernetes.io/projected/fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae-kube-api-access-f66xg\") pod \"neutron-76ccfcd95-9jxjj\" (UID: \"fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae\") " pod="openstack/neutron-76ccfcd95-9jxjj" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.517623 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae-config\") pod \"neutron-76ccfcd95-9jxjj\" (UID: \"fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae\") " pod="openstack/neutron-76ccfcd95-9jxjj" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.517690 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae-ovndb-tls-certs\") pod \"neutron-76ccfcd95-9jxjj\" (UID: \"fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae\") " pod="openstack/neutron-76ccfcd95-9jxjj" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.517719 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae-httpd-config\") pod \"neutron-76ccfcd95-9jxjj\" (UID: \"fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae\") " pod="openstack/neutron-76ccfcd95-9jxjj" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.517821 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae-public-tls-certs\") pod \"neutron-76ccfcd95-9jxjj\" (UID: \"fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae\") " pod="openstack/neutron-76ccfcd95-9jxjj" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.519013 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-76ccfcd95-9jxjj"] Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.588782 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b5f79c58b-j4b9c" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.611645 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-4d766"] Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.619615 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae-combined-ca-bundle\") pod \"neutron-76ccfcd95-9jxjj\" (UID: \"fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae\") " pod="openstack/neutron-76ccfcd95-9jxjj" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.619953 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae-internal-tls-certs\") pod \"neutron-76ccfcd95-9jxjj\" (UID: \"fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae\") " pod="openstack/neutron-76ccfcd95-9jxjj" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.620075 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f66xg\" (UniqueName: \"kubernetes.io/projected/fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae-kube-api-access-f66xg\") pod \"neutron-76ccfcd95-9jxjj\" (UID: \"fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae\") " pod="openstack/neutron-76ccfcd95-9jxjj" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.620178 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae-config\") pod \"neutron-76ccfcd95-9jxjj\" (UID: \"fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae\") " pod="openstack/neutron-76ccfcd95-9jxjj" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.620313 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae-ovndb-tls-certs\") pod \"neutron-76ccfcd95-9jxjj\" (UID: \"fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae\") " pod="openstack/neutron-76ccfcd95-9jxjj" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.620411 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae-httpd-config\") pod \"neutron-76ccfcd95-9jxjj\" (UID: \"fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae\") " pod="openstack/neutron-76ccfcd95-9jxjj" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.620536 4811 scope.go:117] "RemoveContainer" containerID="c330b4a38a47bf2c090a09882de386107781df5dc6084aa4cf6713b6af3bdabb" Feb 16 21:14:22 crc 
kubenswrapper[4811]: I0216 21:14:22.620543 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae-public-tls-certs\") pod \"neutron-76ccfcd95-9jxjj\" (UID: \"fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae\") " pod="openstack/neutron-76ccfcd95-9jxjj" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.629277 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-4d766"] Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.631742 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae-ovndb-tls-certs\") pod \"neutron-76ccfcd95-9jxjj\" (UID: \"fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae\") " pod="openstack/neutron-76ccfcd95-9jxjj" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.631864 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae-httpd-config\") pod \"neutron-76ccfcd95-9jxjj\" (UID: \"fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae\") " pod="openstack/neutron-76ccfcd95-9jxjj" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.631964 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae-combined-ca-bundle\") pod \"neutron-76ccfcd95-9jxjj\" (UID: \"fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae\") " pod="openstack/neutron-76ccfcd95-9jxjj" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.632743 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae-public-tls-certs\") pod \"neutron-76ccfcd95-9jxjj\" (UID: \"fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae\") " 
pod="openstack/neutron-76ccfcd95-9jxjj" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.638258 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae-config\") pod \"neutron-76ccfcd95-9jxjj\" (UID: \"fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae\") " pod="openstack/neutron-76ccfcd95-9jxjj" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.638947 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f66xg\" (UniqueName: \"kubernetes.io/projected/fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae-kube-api-access-f66xg\") pod \"neutron-76ccfcd95-9jxjj\" (UID: \"fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae\") " pod="openstack/neutron-76ccfcd95-9jxjj" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.640648 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae-internal-tls-certs\") pod \"neutron-76ccfcd95-9jxjj\" (UID: \"fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae\") " pod="openstack/neutron-76ccfcd95-9jxjj" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.717477 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="405249e2-47a2-46d7-b5db-4bfb1ce2c477" path="/var/lib/kubelet/pods/405249e2-47a2-46d7-b5db-4bfb1ce2c477/volumes" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.731705 4811 scope.go:117] "RemoveContainer" containerID="5e410f0115b61d53e645d5e282c1b7ac5836b0274ff964cb6e5db3ce79d08e65" Feb 16 21:14:22 crc kubenswrapper[4811]: E0216 21:14:22.732096 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e410f0115b61d53e645d5e282c1b7ac5836b0274ff964cb6e5db3ce79d08e65\": container with ID starting with 5e410f0115b61d53e645d5e282c1b7ac5836b0274ff964cb6e5db3ce79d08e65 not found: ID does not exist" 
containerID="5e410f0115b61d53e645d5e282c1b7ac5836b0274ff964cb6e5db3ce79d08e65" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.732121 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e410f0115b61d53e645d5e282c1b7ac5836b0274ff964cb6e5db3ce79d08e65"} err="failed to get container status \"5e410f0115b61d53e645d5e282c1b7ac5836b0274ff964cb6e5db3ce79d08e65\": rpc error: code = NotFound desc = could not find container \"5e410f0115b61d53e645d5e282c1b7ac5836b0274ff964cb6e5db3ce79d08e65\": container with ID starting with 5e410f0115b61d53e645d5e282c1b7ac5836b0274ff964cb6e5db3ce79d08e65 not found: ID does not exist" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.732143 4811 scope.go:117] "RemoveContainer" containerID="c330b4a38a47bf2c090a09882de386107781df5dc6084aa4cf6713b6af3bdabb" Feb 16 21:14:22 crc kubenswrapper[4811]: E0216 21:14:22.732462 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c330b4a38a47bf2c090a09882de386107781df5dc6084aa4cf6713b6af3bdabb\": container with ID starting with c330b4a38a47bf2c090a09882de386107781df5dc6084aa4cf6713b6af3bdabb not found: ID does not exist" containerID="c330b4a38a47bf2c090a09882de386107781df5dc6084aa4cf6713b6af3bdabb" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.732482 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c330b4a38a47bf2c090a09882de386107781df5dc6084aa4cf6713b6af3bdabb"} err="failed to get container status \"c330b4a38a47bf2c090a09882de386107781df5dc6084aa4cf6713b6af3bdabb\": rpc error: code = NotFound desc = could not find container \"c330b4a38a47bf2c090a09882de386107781df5dc6084aa4cf6713b6af3bdabb\": container with ID starting with c330b4a38a47bf2c090a09882de386107781df5dc6084aa4cf6713b6af3bdabb not found: ID does not exist" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.807647 4811 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b5f79c58b-j4b9c" Feb 16 21:14:22 crc kubenswrapper[4811]: I0216 21:14:22.887571 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-76ccfcd95-9jxjj" Feb 16 21:14:23 crc kubenswrapper[4811]: I0216 21:14:23.481219 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-76ccfcd95-9jxjj"] Feb 16 21:14:23 crc kubenswrapper[4811]: I0216 21:14:23.505742 4811 generic.go:334] "Generic (PLEG): container finished" podID="8020087b-01d7-425a-84fc-dd7e9278f4d2" containerID="c0eb9303beb9b9ff9264df6821cb7005414fa61231cb1b1117099da18b77adb0" exitCode=0 Feb 16 21:14:23 crc kubenswrapper[4811]: I0216 21:14:23.505813 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8020087b-01d7-425a-84fc-dd7e9278f4d2","Type":"ContainerDied","Data":"c0eb9303beb9b9ff9264df6821cb7005414fa61231cb1b1117099da18b77adb0"} Feb 16 21:14:23 crc kubenswrapper[4811]: I0216 21:14:23.512155 4811 generic.go:334] "Generic (PLEG): container finished" podID="ef08d7ef-0bd9-4126-bd7b-0d46b646be40" containerID="f5f7c537b885c9273be01ba6a3b6ccdc3b0e04f82ed60207dba7e4dea389a114" exitCode=0 Feb 16 21:14:23 crc kubenswrapper[4811]: I0216 21:14:23.512236 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b9874789f-2tq4q" event={"ID":"ef08d7ef-0bd9-4126-bd7b-0d46b646be40","Type":"ContainerDied","Data":"f5f7c537b885c9273be01ba6a3b6ccdc3b0e04f82ed60207dba7e4dea389a114"} Feb 16 21:14:23 crc kubenswrapper[4811]: I0216 21:14:23.519015 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76ccfcd95-9jxjj" event={"ID":"fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae","Type":"ContainerStarted","Data":"f1dc7b5f3177f4a51421e778d11569c659115f73e062da3ad5526a8bffcfe662"} Feb 16 21:14:24 crc kubenswrapper[4811]: I0216 21:14:24.192416 4811 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/neutron-5b9874789f-2tq4q" podUID="ef08d7ef-0bd9-4126-bd7b-0d46b646be40" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.168:9696/\": dial tcp 10.217.0.168:9696: connect: connection refused" Feb 16 21:14:24 crc kubenswrapper[4811]: I0216 21:14:24.531599 4811 generic.go:334] "Generic (PLEG): container finished" podID="8020087b-01d7-425a-84fc-dd7e9278f4d2" containerID="ac16ace31c36a0e8e6df8da8adcb5a6f06913d24bbf6fd96aa71653fa22b7c36" exitCode=0 Feb 16 21:14:24 crc kubenswrapper[4811]: I0216 21:14:24.531851 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8020087b-01d7-425a-84fc-dd7e9278f4d2","Type":"ContainerDied","Data":"ac16ace31c36a0e8e6df8da8adcb5a6f06913d24bbf6fd96aa71653fa22b7c36"} Feb 16 21:14:24 crc kubenswrapper[4811]: I0216 21:14:24.533923 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76ccfcd95-9jxjj" event={"ID":"fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae","Type":"ContainerStarted","Data":"35016bbdb1958d2e776d525fe236be9192912cca5be803478cc9e4d39551a865"} Feb 16 21:14:24 crc kubenswrapper[4811]: I0216 21:14:24.533947 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76ccfcd95-9jxjj" event={"ID":"fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae","Type":"ContainerStarted","Data":"4c829e3f1779939d30d0d5cd0625c469b91dae700d4bc5e44def74c79c8263da"} Feb 16 21:14:24 crc kubenswrapper[4811]: I0216 21:14:24.535039 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-76ccfcd95-9jxjj" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.012356 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.037419 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-76ccfcd95-9jxjj" podStartSLOduration=3.037403355 podStartE2EDuration="3.037403355s" podCreationTimestamp="2026-02-16 21:14:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:14:24.569619064 +0000 UTC m=+1082.498915002" watchObservedRunningTime="2026-02-16 21:14:25.037403355 +0000 UTC m=+1082.966699303" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.114839 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8020087b-01d7-425a-84fc-dd7e9278f4d2-etc-machine-id\") pod \"8020087b-01d7-425a-84fc-dd7e9278f4d2\" (UID: \"8020087b-01d7-425a-84fc-dd7e9278f4d2\") " Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.114964 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qxnx\" (UniqueName: \"kubernetes.io/projected/8020087b-01d7-425a-84fc-dd7e9278f4d2-kube-api-access-2qxnx\") pod \"8020087b-01d7-425a-84fc-dd7e9278f4d2\" (UID: \"8020087b-01d7-425a-84fc-dd7e9278f4d2\") " Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.114998 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8020087b-01d7-425a-84fc-dd7e9278f4d2-scripts\") pod \"8020087b-01d7-425a-84fc-dd7e9278f4d2\" (UID: \"8020087b-01d7-425a-84fc-dd7e9278f4d2\") " Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.115070 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8020087b-01d7-425a-84fc-dd7e9278f4d2-config-data-custom\") pod \"8020087b-01d7-425a-84fc-dd7e9278f4d2\" (UID: 
\"8020087b-01d7-425a-84fc-dd7e9278f4d2\") " Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.115126 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8020087b-01d7-425a-84fc-dd7e9278f4d2-config-data\") pod \"8020087b-01d7-425a-84fc-dd7e9278f4d2\" (UID: \"8020087b-01d7-425a-84fc-dd7e9278f4d2\") " Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.115178 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8020087b-01d7-425a-84fc-dd7e9278f4d2-combined-ca-bundle\") pod \"8020087b-01d7-425a-84fc-dd7e9278f4d2\" (UID: \"8020087b-01d7-425a-84fc-dd7e9278f4d2\") " Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.118384 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8020087b-01d7-425a-84fc-dd7e9278f4d2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8020087b-01d7-425a-84fc-dd7e9278f4d2" (UID: "8020087b-01d7-425a-84fc-dd7e9278f4d2"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.125383 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8020087b-01d7-425a-84fc-dd7e9278f4d2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8020087b-01d7-425a-84fc-dd7e9278f4d2" (UID: "8020087b-01d7-425a-84fc-dd7e9278f4d2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.127559 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8020087b-01d7-425a-84fc-dd7e9278f4d2-kube-api-access-2qxnx" (OuterVolumeSpecName: "kube-api-access-2qxnx") pod "8020087b-01d7-425a-84fc-dd7e9278f4d2" (UID: "8020087b-01d7-425a-84fc-dd7e9278f4d2"). 
InnerVolumeSpecName "kube-api-access-2qxnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.127568 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8020087b-01d7-425a-84fc-dd7e9278f4d2-scripts" (OuterVolumeSpecName: "scripts") pod "8020087b-01d7-425a-84fc-dd7e9278f4d2" (UID: "8020087b-01d7-425a-84fc-dd7e9278f4d2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.208472 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8020087b-01d7-425a-84fc-dd7e9278f4d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8020087b-01d7-425a-84fc-dd7e9278f4d2" (UID: "8020087b-01d7-425a-84fc-dd7e9278f4d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.218954 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8020087b-01d7-425a-84fc-dd7e9278f4d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.218978 4811 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8020087b-01d7-425a-84fc-dd7e9278f4d2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.219040 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qxnx\" (UniqueName: \"kubernetes.io/projected/8020087b-01d7-425a-84fc-dd7e9278f4d2-kube-api-access-2qxnx\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.219054 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8020087b-01d7-425a-84fc-dd7e9278f4d2-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.219063 4811 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8020087b-01d7-425a-84fc-dd7e9278f4d2-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.246416 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8020087b-01d7-425a-84fc-dd7e9278f4d2-config-data" (OuterVolumeSpecName: "config-data") pod "8020087b-01d7-425a-84fc-dd7e9278f4d2" (UID: "8020087b-01d7-425a-84fc-dd7e9278f4d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.320428 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8020087b-01d7-425a-84fc-dd7e9278f4d2-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.379389 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.528189 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-internal-tls-certs\") pod \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.528267 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-httpd-config\") pod \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.528304 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw9xc\" (UniqueName: \"kubernetes.io/projected/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-kube-api-access-cw9xc\") pod \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.528359 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-combined-ca-bundle\") pod \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.528430 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-public-tls-certs\") pod \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.528484 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-config\") pod \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.528502 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-ovndb-tls-certs\") pod \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\" (UID: \"ef08d7ef-0bd9-4126-bd7b-0d46b646be40\") " Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.533251 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "ef08d7ef-0bd9-4126-bd7b-0d46b646be40" (UID: "ef08d7ef-0bd9-4126-bd7b-0d46b646be40"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.541950 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-kube-api-access-cw9xc" (OuterVolumeSpecName: "kube-api-access-cw9xc") pod "ef08d7ef-0bd9-4126-bd7b-0d46b646be40" (UID: "ef08d7ef-0bd9-4126-bd7b-0d46b646be40"). InnerVolumeSpecName "kube-api-access-cw9xc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.553338 4811 generic.go:334] "Generic (PLEG): container finished" podID="ef08d7ef-0bd9-4126-bd7b-0d46b646be40" containerID="f9d2524235b8f9166abd10a7283ee29249335c9a330d0ac0fe541fce4826fada" exitCode=0 Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.553543 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b9874789f-2tq4q" event={"ID":"ef08d7ef-0bd9-4126-bd7b-0d46b646be40","Type":"ContainerDied","Data":"f9d2524235b8f9166abd10a7283ee29249335c9a330d0ac0fe541fce4826fada"} Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.553582 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b9874789f-2tq4q" event={"ID":"ef08d7ef-0bd9-4126-bd7b-0d46b646be40","Type":"ContainerDied","Data":"1f2babe0f72266a5a14e5ba1cae8cc3a4d0b5a1a64c9fa744767e4f2e201a33d"} Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.553605 4811 scope.go:117] "RemoveContainer" containerID="f5f7c537b885c9273be01ba6a3b6ccdc3b0e04f82ed60207dba7e4dea389a114" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.553745 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b9874789f-2tq4q" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.563472 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.563539 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8020087b-01d7-425a-84fc-dd7e9278f4d2","Type":"ContainerDied","Data":"3d9fc753a0f69200d9075b2a3622b8625d1a79d913343d3875166f2f304cebfe"} Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.585451 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef08d7ef-0bd9-4126-bd7b-0d46b646be40" (UID: "ef08d7ef-0bd9-4126-bd7b-0d46b646be40"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.607254 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ef08d7ef-0bd9-4126-bd7b-0d46b646be40" (UID: "ef08d7ef-0bd9-4126-bd7b-0d46b646be40"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.616285 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-config" (OuterVolumeSpecName: "config") pod "ef08d7ef-0bd9-4126-bd7b-0d46b646be40" (UID: "ef08d7ef-0bd9-4126-bd7b-0d46b646be40"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.618676 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ef08d7ef-0bd9-4126-bd7b-0d46b646be40" (UID: "ef08d7ef-0bd9-4126-bd7b-0d46b646be40"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.623099 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "ef08d7ef-0bd9-4126-bd7b-0d46b646be40" (UID: "ef08d7ef-0bd9-4126-bd7b-0d46b646be40"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.631115 4811 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.631150 4811 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.631164 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cw9xc\" (UniqueName: \"kubernetes.io/projected/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-kube-api-access-cw9xc\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.631233 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.631249 4811 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.631262 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.631274 4811 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef08d7ef-0bd9-4126-bd7b-0d46b646be40-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.709922 4811 scope.go:117] "RemoveContainer" containerID="f9d2524235b8f9166abd10a7283ee29249335c9a330d0ac0fe541fce4826fada" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.724078 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.737777 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.743052 4811 scope.go:117] "RemoveContainer" containerID="f5f7c537b885c9273be01ba6a3b6ccdc3b0e04f82ed60207dba7e4dea389a114" Feb 16 21:14:25 crc kubenswrapper[4811]: E0216 21:14:25.743746 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5f7c537b885c9273be01ba6a3b6ccdc3b0e04f82ed60207dba7e4dea389a114\": container with ID starting with f5f7c537b885c9273be01ba6a3b6ccdc3b0e04f82ed60207dba7e4dea389a114 not found: ID does not exist" containerID="f5f7c537b885c9273be01ba6a3b6ccdc3b0e04f82ed60207dba7e4dea389a114" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 
21:14:25.743774 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5f7c537b885c9273be01ba6a3b6ccdc3b0e04f82ed60207dba7e4dea389a114"} err="failed to get container status \"f5f7c537b885c9273be01ba6a3b6ccdc3b0e04f82ed60207dba7e4dea389a114\": rpc error: code = NotFound desc = could not find container \"f5f7c537b885c9273be01ba6a3b6ccdc3b0e04f82ed60207dba7e4dea389a114\": container with ID starting with f5f7c537b885c9273be01ba6a3b6ccdc3b0e04f82ed60207dba7e4dea389a114 not found: ID does not exist" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.743801 4811 scope.go:117] "RemoveContainer" containerID="f9d2524235b8f9166abd10a7283ee29249335c9a330d0ac0fe541fce4826fada" Feb 16 21:14:25 crc kubenswrapper[4811]: E0216 21:14:25.744038 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9d2524235b8f9166abd10a7283ee29249335c9a330d0ac0fe541fce4826fada\": container with ID starting with f9d2524235b8f9166abd10a7283ee29249335c9a330d0ac0fe541fce4826fada not found: ID does not exist" containerID="f9d2524235b8f9166abd10a7283ee29249335c9a330d0ac0fe541fce4826fada" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.744086 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9d2524235b8f9166abd10a7283ee29249335c9a330d0ac0fe541fce4826fada"} err="failed to get container status \"f9d2524235b8f9166abd10a7283ee29249335c9a330d0ac0fe541fce4826fada\": rpc error: code = NotFound desc = could not find container \"f9d2524235b8f9166abd10a7283ee29249335c9a330d0ac0fe541fce4826fada\": container with ID starting with f9d2524235b8f9166abd10a7283ee29249335c9a330d0ac0fe541fce4826fada not found: ID does not exist" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.744106 4811 scope.go:117] "RemoveContainer" containerID="c0eb9303beb9b9ff9264df6821cb7005414fa61231cb1b1117099da18b77adb0" Feb 16 21:14:25 crc 
kubenswrapper[4811]: I0216 21:14:25.755931 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 21:14:25 crc kubenswrapper[4811]: E0216 21:14:25.756467 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8020087b-01d7-425a-84fc-dd7e9278f4d2" containerName="cinder-scheduler" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.756492 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8020087b-01d7-425a-84fc-dd7e9278f4d2" containerName="cinder-scheduler" Feb 16 21:14:25 crc kubenswrapper[4811]: E0216 21:14:25.756534 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8020087b-01d7-425a-84fc-dd7e9278f4d2" containerName="probe" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.756543 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8020087b-01d7-425a-84fc-dd7e9278f4d2" containerName="probe" Feb 16 21:14:25 crc kubenswrapper[4811]: E0216 21:14:25.756566 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef08d7ef-0bd9-4126-bd7b-0d46b646be40" containerName="neutron-api" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.756576 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef08d7ef-0bd9-4126-bd7b-0d46b646be40" containerName="neutron-api" Feb 16 21:14:25 crc kubenswrapper[4811]: E0216 21:14:25.756588 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef08d7ef-0bd9-4126-bd7b-0d46b646be40" containerName="neutron-httpd" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.756596 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef08d7ef-0bd9-4126-bd7b-0d46b646be40" containerName="neutron-httpd" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.756820 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef08d7ef-0bd9-4126-bd7b-0d46b646be40" containerName="neutron-httpd" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.756841 4811 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="ef08d7ef-0bd9-4126-bd7b-0d46b646be40" containerName="neutron-api" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.756867 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="8020087b-01d7-425a-84fc-dd7e9278f4d2" containerName="cinder-scheduler" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.756891 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="8020087b-01d7-425a-84fc-dd7e9278f4d2" containerName="probe" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.758528 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.764086 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.774255 4811 scope.go:117] "RemoveContainer" containerID="ac16ace31c36a0e8e6df8da8adcb5a6f06913d24bbf6fd96aa71653fa22b7c36" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.778791 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.834834 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d14d491d-bfdb-47df-92b3-e57f805e415f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d14d491d-bfdb-47df-92b3-e57f805e415f\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.836728 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d14d491d-bfdb-47df-92b3-e57f805e415f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d14d491d-bfdb-47df-92b3-e57f805e415f\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 
21:14:25.836866 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46kjr\" (UniqueName: \"kubernetes.io/projected/d14d491d-bfdb-47df-92b3-e57f805e415f-kube-api-access-46kjr\") pod \"cinder-scheduler-0\" (UID: \"d14d491d-bfdb-47df-92b3-e57f805e415f\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.836940 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d14d491d-bfdb-47df-92b3-e57f805e415f-config-data\") pod \"cinder-scheduler-0\" (UID: \"d14d491d-bfdb-47df-92b3-e57f805e415f\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.836967 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d14d491d-bfdb-47df-92b3-e57f805e415f-scripts\") pod \"cinder-scheduler-0\" (UID: \"d14d491d-bfdb-47df-92b3-e57f805e415f\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.837093 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d14d491d-bfdb-47df-92b3-e57f805e415f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d14d491d-bfdb-47df-92b3-e57f805e415f\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.901547 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5b9874789f-2tq4q"] Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.910133 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5b9874789f-2tq4q"] Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.939160 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/d14d491d-bfdb-47df-92b3-e57f805e415f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d14d491d-bfdb-47df-92b3-e57f805e415f\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.939520 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d14d491d-bfdb-47df-92b3-e57f805e415f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d14d491d-bfdb-47df-92b3-e57f805e415f\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.939650 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d14d491d-bfdb-47df-92b3-e57f805e415f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d14d491d-bfdb-47df-92b3-e57f805e415f\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.939753 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46kjr\" (UniqueName: \"kubernetes.io/projected/d14d491d-bfdb-47df-92b3-e57f805e415f-kube-api-access-46kjr\") pod \"cinder-scheduler-0\" (UID: \"d14d491d-bfdb-47df-92b3-e57f805e415f\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.939854 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d14d491d-bfdb-47df-92b3-e57f805e415f-config-data\") pod \"cinder-scheduler-0\" (UID: \"d14d491d-bfdb-47df-92b3-e57f805e415f\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.939926 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d14d491d-bfdb-47df-92b3-e57f805e415f-scripts\") pod \"cinder-scheduler-0\" (UID: \"d14d491d-bfdb-47df-92b3-e57f805e415f\") 
" pod="openstack/cinder-scheduler-0" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.939287 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d14d491d-bfdb-47df-92b3-e57f805e415f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d14d491d-bfdb-47df-92b3-e57f805e415f\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.945072 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d14d491d-bfdb-47df-92b3-e57f805e415f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d14d491d-bfdb-47df-92b3-e57f805e415f\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.945172 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d14d491d-bfdb-47df-92b3-e57f805e415f-config-data\") pod \"cinder-scheduler-0\" (UID: \"d14d491d-bfdb-47df-92b3-e57f805e415f\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.945550 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d14d491d-bfdb-47df-92b3-e57f805e415f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d14d491d-bfdb-47df-92b3-e57f805e415f\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.949206 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d14d491d-bfdb-47df-92b3-e57f805e415f-scripts\") pod \"cinder-scheduler-0\" (UID: \"d14d491d-bfdb-47df-92b3-e57f805e415f\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:25 crc kubenswrapper[4811]: I0216 21:14:25.956510 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46kjr\" 
(UniqueName: \"kubernetes.io/projected/d14d491d-bfdb-47df-92b3-e57f805e415f-kube-api-access-46kjr\") pod \"cinder-scheduler-0\" (UID: \"d14d491d-bfdb-47df-92b3-e57f805e415f\") " pod="openstack/cinder-scheduler-0" Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.073332 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.102610 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.463688 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5db7fb44c6-5zcls"] Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.465257 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.477873 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5db7fb44c6-5zcls"] Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.558721 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e-public-tls-certs\") pod \"placement-5db7fb44c6-5zcls\" (UID: \"f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e\") " pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.561789 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e-scripts\") pod \"placement-5db7fb44c6-5zcls\" (UID: \"f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e\") " pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.562000 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjc8n\" (UniqueName: \"kubernetes.io/projected/f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e-kube-api-access-qjc8n\") pod \"placement-5db7fb44c6-5zcls\" (UID: \"f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e\") " pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.562172 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e-config-data\") pod \"placement-5db7fb44c6-5zcls\" (UID: \"f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e\") " pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.562431 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e-logs\") pod \"placement-5db7fb44c6-5zcls\" (UID: \"f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e\") " pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.562587 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e-combined-ca-bundle\") pod \"placement-5db7fb44c6-5zcls\" (UID: \"f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e\") " pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.562818 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e-internal-tls-certs\") pod \"placement-5db7fb44c6-5zcls\" (UID: \"f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e\") " pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:26 crc kubenswrapper[4811]: W0216 21:14:26.613183 4811 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd14d491d_bfdb_47df_92b3_e57f805e415f.slice/crio-a72dea2ad69b1dbae6568d32fa87d0e359de753dbf23a3b7068f5f63b4c8ee22 WatchSource:0}: Error finding container a72dea2ad69b1dbae6568d32fa87d0e359de753dbf23a3b7068f5f63b4c8ee22: Status 404 returned error can't find the container with id a72dea2ad69b1dbae6568d32fa87d0e359de753dbf23a3b7068f5f63b4c8ee22 Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.619362 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.668836 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e-combined-ca-bundle\") pod \"placement-5db7fb44c6-5zcls\" (UID: \"f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e\") " pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.668900 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e-internal-tls-certs\") pod \"placement-5db7fb44c6-5zcls\" (UID: \"f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e\") " pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.668962 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e-public-tls-certs\") pod \"placement-5db7fb44c6-5zcls\" (UID: \"f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e\") " pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.669037 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e-scripts\") pod \"placement-5db7fb44c6-5zcls\" (UID: \"f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e\") " pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.669053 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjc8n\" (UniqueName: \"kubernetes.io/projected/f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e-kube-api-access-qjc8n\") pod \"placement-5db7fb44c6-5zcls\" (UID: \"f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e\") " pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.669099 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e-config-data\") pod \"placement-5db7fb44c6-5zcls\" (UID: \"f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e\") " pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.669157 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e-logs\") pod \"placement-5db7fb44c6-5zcls\" (UID: \"f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e\") " pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.669484 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e-logs\") pod \"placement-5db7fb44c6-5zcls\" (UID: \"f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e\") " pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.672557 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e-combined-ca-bundle\") pod \"placement-5db7fb44c6-5zcls\" (UID: 
\"f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e\") " pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.676118 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e-internal-tls-certs\") pod \"placement-5db7fb44c6-5zcls\" (UID: \"f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e\") " pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.677797 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e-config-data\") pod \"placement-5db7fb44c6-5zcls\" (UID: \"f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e\") " pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.682555 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e-scripts\") pod \"placement-5db7fb44c6-5zcls\" (UID: \"f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e\") " pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.685419 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e-public-tls-certs\") pod \"placement-5db7fb44c6-5zcls\" (UID: \"f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e\") " pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.696333 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjc8n\" (UniqueName: \"kubernetes.io/projected/f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e-kube-api-access-qjc8n\") pod \"placement-5db7fb44c6-5zcls\" (UID: \"f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e\") " pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:26 crc 
kubenswrapper[4811]: I0216 21:14:26.720703 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8020087b-01d7-425a-84fc-dd7e9278f4d2" path="/var/lib/kubelet/pods/8020087b-01d7-425a-84fc-dd7e9278f4d2/volumes" Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.721836 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef08d7ef-0bd9-4126-bd7b-0d46b646be40" path="/var/lib/kubelet/pods/ef08d7ef-0bd9-4126-bd7b-0d46b646be40/volumes" Feb 16 21:14:26 crc kubenswrapper[4811]: I0216 21:14:26.792394 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:27 crc kubenswrapper[4811]: I0216 21:14:27.249499 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5db7fb44c6-5zcls"] Feb 16 21:14:27 crc kubenswrapper[4811]: W0216 21:14:27.250659 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1d045d5_6ae5_4c3e_aefb_c1fc3e90f64e.slice/crio-cb4bb0fcacdd5e2f272ab7ba552c157938e975d366d7cfc0dc185e90d7ba90e0 WatchSource:0}: Error finding container cb4bb0fcacdd5e2f272ab7ba552c157938e975d366d7cfc0dc185e90d7ba90e0: Status 404 returned error can't find the container with id cb4bb0fcacdd5e2f272ab7ba552c157938e975d366d7cfc0dc185e90d7ba90e0 Feb 16 21:14:27 crc kubenswrapper[4811]: I0216 21:14:27.609606 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d14d491d-bfdb-47df-92b3-e57f805e415f","Type":"ContainerStarted","Data":"fbc25aaa216c44a47984494f1118229dc551faeec11b103ad6dad87b22fbaa92"} Feb 16 21:14:27 crc kubenswrapper[4811]: I0216 21:14:27.609882 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d14d491d-bfdb-47df-92b3-e57f805e415f","Type":"ContainerStarted","Data":"a72dea2ad69b1dbae6568d32fa87d0e359de753dbf23a3b7068f5f63b4c8ee22"} Feb 16 21:14:27 crc 
kubenswrapper[4811]: I0216 21:14:27.611552 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5db7fb44c6-5zcls" event={"ID":"f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e","Type":"ContainerStarted","Data":"da3a4d9fa346ca874dc72f596c3e90787877dddfb530461a67b0d2d17669f0ad"} Feb 16 21:14:27 crc kubenswrapper[4811]: I0216 21:14:27.611576 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5db7fb44c6-5zcls" event={"ID":"f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e","Type":"ContainerStarted","Data":"cb4bb0fcacdd5e2f272ab7ba552c157938e975d366d7cfc0dc185e90d7ba90e0"} Feb 16 21:14:28 crc kubenswrapper[4811]: I0216 21:14:28.623924 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d14d491d-bfdb-47df-92b3-e57f805e415f","Type":"ContainerStarted","Data":"d988bb51cb939a2d29865aaca7effe95f52a009bba1503deae16a28d60d34612"} Feb 16 21:14:28 crc kubenswrapper[4811]: I0216 21:14:28.625967 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5db7fb44c6-5zcls" event={"ID":"f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e","Type":"ContainerStarted","Data":"8d15102c6e088b12fb85ca031c0195ad5562da792fc6733f293bdf23b6b9b203"} Feb 16 21:14:28 crc kubenswrapper[4811]: I0216 21:14:28.626096 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:28 crc kubenswrapper[4811]: I0216 21:14:28.660477 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.660450643 podStartE2EDuration="3.660450643s" podCreationTimestamp="2026-02-16 21:14:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:14:28.642682999 +0000 UTC m=+1086.571978927" watchObservedRunningTime="2026-02-16 21:14:28.660450643 +0000 UTC m=+1086.589746621" Feb 16 21:14:28 crc kubenswrapper[4811]: 
I0216 21:14:28.679954 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5db7fb44c6-5zcls" podStartSLOduration=2.67992658 podStartE2EDuration="2.67992658s" podCreationTimestamp="2026-02-16 21:14:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:14:28.669414492 +0000 UTC m=+1086.598710450" watchObservedRunningTime="2026-02-16 21:14:28.67992658 +0000 UTC m=+1086.609222558" Feb 16 21:14:29 crc kubenswrapper[4811]: I0216 21:14:29.118842 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-77895df746-7lfzq" Feb 16 21:14:29 crc kubenswrapper[4811]: I0216 21:14:29.227060 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-77895df746-7lfzq" Feb 16 21:14:29 crc kubenswrapper[4811]: I0216 21:14:29.298811 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7b5f79c58b-j4b9c"] Feb 16 21:14:29 crc kubenswrapper[4811]: I0216 21:14:29.299095 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7b5f79c58b-j4b9c" podUID="c6c93c24-92dc-4a85-8d40-862fcb47fbe3" containerName="barbican-api-log" containerID="cri-o://265106bc24e0ddcb19134a1054682c9962275ab9fa5b104c3b09c7477e07b9e5" gracePeriod=30 Feb 16 21:14:29 crc kubenswrapper[4811]: I0216 21:14:29.299623 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7b5f79c58b-j4b9c" podUID="c6c93c24-92dc-4a85-8d40-862fcb47fbe3" containerName="barbican-api" containerID="cri-o://69ad880a0a307911f5fe1d1ae01c82340d364d6682a880a824fa705c40f36ff6" gracePeriod=30 Feb 16 21:14:29 crc kubenswrapper[4811]: I0216 21:14:29.641600 4811 generic.go:334] "Generic (PLEG): container finished" podID="c6c93c24-92dc-4a85-8d40-862fcb47fbe3" 
containerID="265106bc24e0ddcb19134a1054682c9962275ab9fa5b104c3b09c7477e07b9e5" exitCode=143 Feb 16 21:14:29 crc kubenswrapper[4811]: I0216 21:14:29.641707 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b5f79c58b-j4b9c" event={"ID":"c6c93c24-92dc-4a85-8d40-862fcb47fbe3","Type":"ContainerDied","Data":"265106bc24e0ddcb19134a1054682c9962275ab9fa5b104c3b09c7477e07b9e5"} Feb 16 21:14:29 crc kubenswrapper[4811]: I0216 21:14:29.642033 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:30 crc kubenswrapper[4811]: I0216 21:14:30.099873 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 16 21:14:31 crc kubenswrapper[4811]: I0216 21:14:31.067237 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7bf9c6cdb6-77vqw" Feb 16 21:14:31 crc kubenswrapper[4811]: I0216 21:14:31.074384 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 21:14:31 crc kubenswrapper[4811]: I0216 21:14:31.608869 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 16 21:14:31 crc kubenswrapper[4811]: I0216 21:14:31.610857 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 16 21:14:31 crc kubenswrapper[4811]: I0216 21:14:31.613105 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 16 21:14:31 crc kubenswrapper[4811]: I0216 21:14:31.613459 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 16 21:14:31 crc kubenswrapper[4811]: I0216 21:14:31.615467 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-68pmq" Feb 16 21:14:31 crc kubenswrapper[4811]: I0216 21:14:31.631855 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 21:14:31 crc kubenswrapper[4811]: I0216 21:14:31.711885 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt8d5\" (UniqueName: \"kubernetes.io/projected/48d3b16f-0a4b-42bc-9443-19ce343df00a-kube-api-access-wt8d5\") pod \"openstackclient\" (UID: \"48d3b16f-0a4b-42bc-9443-19ce343df00a\") " pod="openstack/openstackclient" Feb 16 21:14:31 crc kubenswrapper[4811]: I0216 21:14:31.712018 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d3b16f-0a4b-42bc-9443-19ce343df00a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"48d3b16f-0a4b-42bc-9443-19ce343df00a\") " pod="openstack/openstackclient" Feb 16 21:14:31 crc kubenswrapper[4811]: I0216 21:14:31.712057 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/48d3b16f-0a4b-42bc-9443-19ce343df00a-openstack-config\") pod \"openstackclient\" (UID: \"48d3b16f-0a4b-42bc-9443-19ce343df00a\") " pod="openstack/openstackclient" Feb 16 21:14:31 crc kubenswrapper[4811]: I0216 21:14:31.712128 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/48d3b16f-0a4b-42bc-9443-19ce343df00a-openstack-config-secret\") pod \"openstackclient\" (UID: \"48d3b16f-0a4b-42bc-9443-19ce343df00a\") " pod="openstack/openstackclient" Feb 16 21:14:31 crc kubenswrapper[4811]: I0216 21:14:31.814302 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/48d3b16f-0a4b-42bc-9443-19ce343df00a-openstack-config\") pod \"openstackclient\" (UID: \"48d3b16f-0a4b-42bc-9443-19ce343df00a\") " pod="openstack/openstackclient" Feb 16 21:14:31 crc kubenswrapper[4811]: I0216 21:14:31.814457 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/48d3b16f-0a4b-42bc-9443-19ce343df00a-openstack-config-secret\") pod \"openstackclient\" (UID: \"48d3b16f-0a4b-42bc-9443-19ce343df00a\") " pod="openstack/openstackclient" Feb 16 21:14:31 crc kubenswrapper[4811]: I0216 21:14:31.814582 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt8d5\" (UniqueName: \"kubernetes.io/projected/48d3b16f-0a4b-42bc-9443-19ce343df00a-kube-api-access-wt8d5\") pod \"openstackclient\" (UID: \"48d3b16f-0a4b-42bc-9443-19ce343df00a\") " pod="openstack/openstackclient" Feb 16 21:14:31 crc kubenswrapper[4811]: I0216 21:14:31.814711 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d3b16f-0a4b-42bc-9443-19ce343df00a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"48d3b16f-0a4b-42bc-9443-19ce343df00a\") " pod="openstack/openstackclient" Feb 16 21:14:31 crc kubenswrapper[4811]: I0216 21:14:31.815849 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/48d3b16f-0a4b-42bc-9443-19ce343df00a-openstack-config\") pod \"openstackclient\" (UID: \"48d3b16f-0a4b-42bc-9443-19ce343df00a\") " pod="openstack/openstackclient" Feb 16 21:14:31 crc kubenswrapper[4811]: I0216 21:14:31.820780 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/48d3b16f-0a4b-42bc-9443-19ce343df00a-openstack-config-secret\") pod \"openstackclient\" (UID: \"48d3b16f-0a4b-42bc-9443-19ce343df00a\") " pod="openstack/openstackclient" Feb 16 21:14:31 crc kubenswrapper[4811]: I0216 21:14:31.821735 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d3b16f-0a4b-42bc-9443-19ce343df00a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"48d3b16f-0a4b-42bc-9443-19ce343df00a\") " pod="openstack/openstackclient" Feb 16 21:14:31 crc kubenswrapper[4811]: I0216 21:14:31.837746 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt8d5\" (UniqueName: \"kubernetes.io/projected/48d3b16f-0a4b-42bc-9443-19ce343df00a-kube-api-access-wt8d5\") pod \"openstackclient\" (UID: \"48d3b16f-0a4b-42bc-9443-19ce343df00a\") " pod="openstack/openstackclient" Feb 16 21:14:31 crc kubenswrapper[4811]: I0216 21:14:31.929764 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 16 21:14:32 crc kubenswrapper[4811]: I0216 21:14:32.427590 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 21:14:32 crc kubenswrapper[4811]: I0216 21:14:32.694322 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"48d3b16f-0a4b-42bc-9443-19ce343df00a","Type":"ContainerStarted","Data":"08b9951fcedd18222b5dc948d9d19b537e6a2eb88062bd47add21765fbc193ec"} Feb 16 21:14:32 crc kubenswrapper[4811]: E0216 21:14:32.726729 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:14:32 crc kubenswrapper[4811]: I0216 21:14:32.748343 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b5f79c58b-j4b9c" podUID="c6c93c24-92dc-4a85-8d40-862fcb47fbe3" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.176:9311/healthcheck\": read tcp 10.217.0.2:37052->10.217.0.176:9311: read: connection reset by peer" Feb 16 21:14:32 crc kubenswrapper[4811]: I0216 21:14:32.748838 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b5f79c58b-j4b9c" podUID="c6c93c24-92dc-4a85-8d40-862fcb47fbe3" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.176:9311/healthcheck\": read tcp 10.217.0.2:37046->10.217.0.176:9311: read: connection reset by peer" Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.194972 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7b5f79c58b-j4b9c" Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.371370 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-config-data-custom\") pod \"c6c93c24-92dc-4a85-8d40-862fcb47fbe3\" (UID: \"c6c93c24-92dc-4a85-8d40-862fcb47fbe3\") " Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.371442 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-logs\") pod \"c6c93c24-92dc-4a85-8d40-862fcb47fbe3\" (UID: \"c6c93c24-92dc-4a85-8d40-862fcb47fbe3\") " Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.371672 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwssf\" (UniqueName: \"kubernetes.io/projected/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-kube-api-access-nwssf\") pod \"c6c93c24-92dc-4a85-8d40-862fcb47fbe3\" (UID: \"c6c93c24-92dc-4a85-8d40-862fcb47fbe3\") " Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.371791 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-config-data\") pod \"c6c93c24-92dc-4a85-8d40-862fcb47fbe3\" (UID: \"c6c93c24-92dc-4a85-8d40-862fcb47fbe3\") " Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.371828 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-combined-ca-bundle\") pod \"c6c93c24-92dc-4a85-8d40-862fcb47fbe3\" (UID: \"c6c93c24-92dc-4a85-8d40-862fcb47fbe3\") " Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.374439 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-logs" (OuterVolumeSpecName: "logs") pod "c6c93c24-92dc-4a85-8d40-862fcb47fbe3" (UID: "c6c93c24-92dc-4a85-8d40-862fcb47fbe3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.381360 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c6c93c24-92dc-4a85-8d40-862fcb47fbe3" (UID: "c6c93c24-92dc-4a85-8d40-862fcb47fbe3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.398344 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-kube-api-access-nwssf" (OuterVolumeSpecName: "kube-api-access-nwssf") pod "c6c93c24-92dc-4a85-8d40-862fcb47fbe3" (UID: "c6c93c24-92dc-4a85-8d40-862fcb47fbe3"). InnerVolumeSpecName "kube-api-access-nwssf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.452429 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-config-data" (OuterVolumeSpecName: "config-data") pod "c6c93c24-92dc-4a85-8d40-862fcb47fbe3" (UID: "c6c93c24-92dc-4a85-8d40-862fcb47fbe3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.468883 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6c93c24-92dc-4a85-8d40-862fcb47fbe3" (UID: "c6c93c24-92dc-4a85-8d40-862fcb47fbe3"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.474484 4811 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.474527 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.474541 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwssf\" (UniqueName: \"kubernetes.io/projected/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-kube-api-access-nwssf\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.474591 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.474604 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c93c24-92dc-4a85-8d40-862fcb47fbe3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.722884 4811 generic.go:334] "Generic (PLEG): container finished" podID="c6c93c24-92dc-4a85-8d40-862fcb47fbe3" containerID="69ad880a0a307911f5fe1d1ae01c82340d364d6682a880a824fa705c40f36ff6" exitCode=0 Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.722930 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b5f79c58b-j4b9c" event={"ID":"c6c93c24-92dc-4a85-8d40-862fcb47fbe3","Type":"ContainerDied","Data":"69ad880a0a307911f5fe1d1ae01c82340d364d6682a880a824fa705c40f36ff6"} 
Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.722956 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b5f79c58b-j4b9c" event={"ID":"c6c93c24-92dc-4a85-8d40-862fcb47fbe3","Type":"ContainerDied","Data":"2c1bfeb9bd6f25e76c1859b5d6c5cdcdd21663ac02fba64abb0aba3902be0462"} Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.722977 4811 scope.go:117] "RemoveContainer" containerID="69ad880a0a307911f5fe1d1ae01c82340d364d6682a880a824fa705c40f36ff6" Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.723093 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b5f79c58b-j4b9c" Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.767494 4811 scope.go:117] "RemoveContainer" containerID="265106bc24e0ddcb19134a1054682c9962275ab9fa5b104c3b09c7477e07b9e5" Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.770354 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7b5f79c58b-j4b9c"] Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.784426 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7b5f79c58b-j4b9c"] Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.788073 4811 scope.go:117] "RemoveContainer" containerID="69ad880a0a307911f5fe1d1ae01c82340d364d6682a880a824fa705c40f36ff6" Feb 16 21:14:33 crc kubenswrapper[4811]: E0216 21:14:33.788697 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69ad880a0a307911f5fe1d1ae01c82340d364d6682a880a824fa705c40f36ff6\": container with ID starting with 69ad880a0a307911f5fe1d1ae01c82340d364d6682a880a824fa705c40f36ff6 not found: ID does not exist" containerID="69ad880a0a307911f5fe1d1ae01c82340d364d6682a880a824fa705c40f36ff6" Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.788744 4811 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"69ad880a0a307911f5fe1d1ae01c82340d364d6682a880a824fa705c40f36ff6"} err="failed to get container status \"69ad880a0a307911f5fe1d1ae01c82340d364d6682a880a824fa705c40f36ff6\": rpc error: code = NotFound desc = could not find container \"69ad880a0a307911f5fe1d1ae01c82340d364d6682a880a824fa705c40f36ff6\": container with ID starting with 69ad880a0a307911f5fe1d1ae01c82340d364d6682a880a824fa705c40f36ff6 not found: ID does not exist" Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.788774 4811 scope.go:117] "RemoveContainer" containerID="265106bc24e0ddcb19134a1054682c9962275ab9fa5b104c3b09c7477e07b9e5" Feb 16 21:14:33 crc kubenswrapper[4811]: E0216 21:14:33.789085 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"265106bc24e0ddcb19134a1054682c9962275ab9fa5b104c3b09c7477e07b9e5\": container with ID starting with 265106bc24e0ddcb19134a1054682c9962275ab9fa5b104c3b09c7477e07b9e5 not found: ID does not exist" containerID="265106bc24e0ddcb19134a1054682c9962275ab9fa5b104c3b09c7477e07b9e5" Feb 16 21:14:33 crc kubenswrapper[4811]: I0216 21:14:33.789109 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"265106bc24e0ddcb19134a1054682c9962275ab9fa5b104c3b09c7477e07b9e5"} err="failed to get container status \"265106bc24e0ddcb19134a1054682c9962275ab9fa5b104c3b09c7477e07b9e5\": rpc error: code = NotFound desc = could not find container \"265106bc24e0ddcb19134a1054682c9962275ab9fa5b104c3b09c7477e07b9e5\": container with ID starting with 265106bc24e0ddcb19134a1054682c9962275ab9fa5b104c3b09c7477e07b9e5 not found: ID does not exist" Feb 16 21:14:34 crc kubenswrapper[4811]: I0216 21:14:34.716022 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6c93c24-92dc-4a85-8d40-862fcb47fbe3" path="/var/lib/kubelet/pods/c6c93c24-92dc-4a85-8d40-862fcb47fbe3/volumes" Feb 16 21:14:35 crc kubenswrapper[4811]: I0216 
21:14:35.882296 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-69cc95b6b9-n22wz"] Feb 16 21:14:35 crc kubenswrapper[4811]: E0216 21:14:35.882931 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c93c24-92dc-4a85-8d40-862fcb47fbe3" containerName="barbican-api" Feb 16 21:14:35 crc kubenswrapper[4811]: I0216 21:14:35.882942 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c93c24-92dc-4a85-8d40-862fcb47fbe3" containerName="barbican-api" Feb 16 21:14:35 crc kubenswrapper[4811]: E0216 21:14:35.882974 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c93c24-92dc-4a85-8d40-862fcb47fbe3" containerName="barbican-api-log" Feb 16 21:14:35 crc kubenswrapper[4811]: I0216 21:14:35.882980 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c93c24-92dc-4a85-8d40-862fcb47fbe3" containerName="barbican-api-log" Feb 16 21:14:35 crc kubenswrapper[4811]: I0216 21:14:35.883150 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c93c24-92dc-4a85-8d40-862fcb47fbe3" containerName="barbican-api" Feb 16 21:14:35 crc kubenswrapper[4811]: I0216 21:14:35.883178 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c93c24-92dc-4a85-8d40-862fcb47fbe3" containerName="barbican-api-log" Feb 16 21:14:35 crc kubenswrapper[4811]: I0216 21:14:35.884445 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:35 crc kubenswrapper[4811]: I0216 21:14:35.886106 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 16 21:14:35 crc kubenswrapper[4811]: I0216 21:14:35.886341 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 16 21:14:35 crc kubenswrapper[4811]: I0216 21:14:35.886418 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 21:14:35 crc kubenswrapper[4811]: I0216 21:14:35.902180 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-69cc95b6b9-n22wz"] Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.002584 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.003028 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c8f1851-630d-4db5-8f53-1edcc96e1706" containerName="ceilometer-central-agent" containerID="cri-o://739b7a86ad89f7167b6225ae1a8e6771de0f13bcdae6f03d582802000a85e879" gracePeriod=30 Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.003361 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c8f1851-630d-4db5-8f53-1edcc96e1706" containerName="ceilometer-notification-agent" containerID="cri-o://4123c2b47bef8639280abfe06dcefe985e66b445d0e9a8a7700a19b605ab5333" gracePeriod=30 Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.003434 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c8f1851-630d-4db5-8f53-1edcc96e1706" containerName="proxy-httpd" containerID="cri-o://14870886c0c4cdec5a84c500928d1f8fd68663a4d06bb779aa686b4c919452b8" gracePeriod=30 Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 
21:14:36.003832 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c8f1851-630d-4db5-8f53-1edcc96e1706" containerName="sg-core" containerID="cri-o://2e5cf43f4fab9cf34ea911a08fc98814045502156f1eedfde384825e654f48ac" gracePeriod=30 Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.021919 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6d638a7-6781-47f5-af27-712f046ec70a-etc-swift\") pod \"swift-proxy-69cc95b6b9-n22wz\" (UID: \"a6d638a7-6781-47f5-af27-712f046ec70a\") " pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.021999 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d638a7-6781-47f5-af27-712f046ec70a-combined-ca-bundle\") pod \"swift-proxy-69cc95b6b9-n22wz\" (UID: \"a6d638a7-6781-47f5-af27-712f046ec70a\") " pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.022315 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6d638a7-6781-47f5-af27-712f046ec70a-run-httpd\") pod \"swift-proxy-69cc95b6b9-n22wz\" (UID: \"a6d638a7-6781-47f5-af27-712f046ec70a\") " pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.022958 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6d638a7-6781-47f5-af27-712f046ec70a-log-httpd\") pod \"swift-proxy-69cc95b6b9-n22wz\" (UID: \"a6d638a7-6781-47f5-af27-712f046ec70a\") " pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.023026 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5dwm\" (UniqueName: \"kubernetes.io/projected/a6d638a7-6781-47f5-af27-712f046ec70a-kube-api-access-j5dwm\") pod \"swift-proxy-69cc95b6b9-n22wz\" (UID: \"a6d638a7-6781-47f5-af27-712f046ec70a\") " pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.023087 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6d638a7-6781-47f5-af27-712f046ec70a-config-data\") pod \"swift-proxy-69cc95b6b9-n22wz\" (UID: \"a6d638a7-6781-47f5-af27-712f046ec70a\") " pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.023243 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6d638a7-6781-47f5-af27-712f046ec70a-public-tls-certs\") pod \"swift-proxy-69cc95b6b9-n22wz\" (UID: \"a6d638a7-6781-47f5-af27-712f046ec70a\") " pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.023280 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6d638a7-6781-47f5-af27-712f046ec70a-internal-tls-certs\") pod \"swift-proxy-69cc95b6b9-n22wz\" (UID: \"a6d638a7-6781-47f5-af27-712f046ec70a\") " pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.103914 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7c8f1851-630d-4db5-8f53-1edcc96e1706" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.180:3000/\": read tcp 10.217.0.2:52642->10.217.0.180:3000: read: connection reset by peer" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.124763 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6d638a7-6781-47f5-af27-712f046ec70a-etc-swift\") pod \"swift-proxy-69cc95b6b9-n22wz\" (UID: \"a6d638a7-6781-47f5-af27-712f046ec70a\") " pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.124853 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d638a7-6781-47f5-af27-712f046ec70a-combined-ca-bundle\") pod \"swift-proxy-69cc95b6b9-n22wz\" (UID: \"a6d638a7-6781-47f5-af27-712f046ec70a\") " pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.124917 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6d638a7-6781-47f5-af27-712f046ec70a-run-httpd\") pod \"swift-proxy-69cc95b6b9-n22wz\" (UID: \"a6d638a7-6781-47f5-af27-712f046ec70a\") " pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.124954 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6d638a7-6781-47f5-af27-712f046ec70a-log-httpd\") pod \"swift-proxy-69cc95b6b9-n22wz\" (UID: \"a6d638a7-6781-47f5-af27-712f046ec70a\") " pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.124987 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5dwm\" (UniqueName: \"kubernetes.io/projected/a6d638a7-6781-47f5-af27-712f046ec70a-kube-api-access-j5dwm\") pod \"swift-proxy-69cc95b6b9-n22wz\" (UID: \"a6d638a7-6781-47f5-af27-712f046ec70a\") " pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.125020 4811 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6d638a7-6781-47f5-af27-712f046ec70a-config-data\") pod \"swift-proxy-69cc95b6b9-n22wz\" (UID: \"a6d638a7-6781-47f5-af27-712f046ec70a\") " pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.125081 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6d638a7-6781-47f5-af27-712f046ec70a-public-tls-certs\") pod \"swift-proxy-69cc95b6b9-n22wz\" (UID: \"a6d638a7-6781-47f5-af27-712f046ec70a\") " pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.125113 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6d638a7-6781-47f5-af27-712f046ec70a-internal-tls-certs\") pod \"swift-proxy-69cc95b6b9-n22wz\" (UID: \"a6d638a7-6781-47f5-af27-712f046ec70a\") " pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.130732 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6d638a7-6781-47f5-af27-712f046ec70a-log-httpd\") pod \"swift-proxy-69cc95b6b9-n22wz\" (UID: \"a6d638a7-6781-47f5-af27-712f046ec70a\") " pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.130992 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6d638a7-6781-47f5-af27-712f046ec70a-run-httpd\") pod \"swift-proxy-69cc95b6b9-n22wz\" (UID: \"a6d638a7-6781-47f5-af27-712f046ec70a\") " pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.132075 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a6d638a7-6781-47f5-af27-712f046ec70a-internal-tls-certs\") pod \"swift-proxy-69cc95b6b9-n22wz\" (UID: \"a6d638a7-6781-47f5-af27-712f046ec70a\") " pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.132458 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6d638a7-6781-47f5-af27-712f046ec70a-etc-swift\") pod \"swift-proxy-69cc95b6b9-n22wz\" (UID: \"a6d638a7-6781-47f5-af27-712f046ec70a\") " pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.133357 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d638a7-6781-47f5-af27-712f046ec70a-combined-ca-bundle\") pod \"swift-proxy-69cc95b6b9-n22wz\" (UID: \"a6d638a7-6781-47f5-af27-712f046ec70a\") " pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.133868 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6d638a7-6781-47f5-af27-712f046ec70a-public-tls-certs\") pod \"swift-proxy-69cc95b6b9-n22wz\" (UID: \"a6d638a7-6781-47f5-af27-712f046ec70a\") " pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.134133 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6d638a7-6781-47f5-af27-712f046ec70a-config-data\") pod \"swift-proxy-69cc95b6b9-n22wz\" (UID: \"a6d638a7-6781-47f5-af27-712f046ec70a\") " pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.143619 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5dwm\" (UniqueName: \"kubernetes.io/projected/a6d638a7-6781-47f5-af27-712f046ec70a-kube-api-access-j5dwm\") pod 
\"swift-proxy-69cc95b6b9-n22wz\" (UID: \"a6d638a7-6781-47f5-af27-712f046ec70a\") " pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.235083 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.316096 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.807845 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-69cc95b6b9-n22wz"] Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.817600 4811 generic.go:334] "Generic (PLEG): container finished" podID="7c8f1851-630d-4db5-8f53-1edcc96e1706" containerID="14870886c0c4cdec5a84c500928d1f8fd68663a4d06bb779aa686b4c919452b8" exitCode=0 Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.817633 4811 generic.go:334] "Generic (PLEG): container finished" podID="7c8f1851-630d-4db5-8f53-1edcc96e1706" containerID="2e5cf43f4fab9cf34ea911a08fc98814045502156f1eedfde384825e654f48ac" exitCode=2 Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.817642 4811 generic.go:334] "Generic (PLEG): container finished" podID="7c8f1851-630d-4db5-8f53-1edcc96e1706" containerID="739b7a86ad89f7167b6225ae1a8e6771de0f13bcdae6f03d582802000a85e879" exitCode=0 Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.817665 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c8f1851-630d-4db5-8f53-1edcc96e1706","Type":"ContainerDied","Data":"14870886c0c4cdec5a84c500928d1f8fd68663a4d06bb779aa686b4c919452b8"} Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.817693 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7c8f1851-630d-4db5-8f53-1edcc96e1706","Type":"ContainerDied","Data":"2e5cf43f4fab9cf34ea911a08fc98814045502156f1eedfde384825e654f48ac"} Feb 16 21:14:36 crc kubenswrapper[4811]: I0216 21:14:36.817704 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c8f1851-630d-4db5-8f53-1edcc96e1706","Type":"ContainerDied","Data":"739b7a86ad89f7167b6225ae1a8e6771de0f13bcdae6f03d582802000a85e879"} Feb 16 21:14:39 crc kubenswrapper[4811]: I0216 21:14:39.857927 4811 generic.go:334] "Generic (PLEG): container finished" podID="7c8f1851-630d-4db5-8f53-1edcc96e1706" containerID="4123c2b47bef8639280abfe06dcefe985e66b445d0e9a8a7700a19b605ab5333" exitCode=0 Feb 16 21:14:39 crc kubenswrapper[4811]: I0216 21:14:39.858151 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c8f1851-630d-4db5-8f53-1edcc96e1706","Type":"ContainerDied","Data":"4123c2b47bef8639280abfe06dcefe985e66b445d0e9a8a7700a19b605ab5333"} Feb 16 21:14:41 crc kubenswrapper[4811]: I0216 21:14:41.893380 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-69cc95b6b9-n22wz" event={"ID":"a6d638a7-6781-47f5-af27-712f046ec70a","Type":"ContainerStarted","Data":"8328ba213324ea5860b508ec7c031f682f0974d5c5813ad3f08b97962a829031"} Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.157082 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.264223 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c8f1851-630d-4db5-8f53-1edcc96e1706-scripts\") pod \"7c8f1851-630d-4db5-8f53-1edcc96e1706\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.264641 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c8f1851-630d-4db5-8f53-1edcc96e1706-sg-core-conf-yaml\") pod \"7c8f1851-630d-4db5-8f53-1edcc96e1706\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.264689 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c8f1851-630d-4db5-8f53-1edcc96e1706-run-httpd\") pod \"7c8f1851-630d-4db5-8f53-1edcc96e1706\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.264745 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c8f1851-630d-4db5-8f53-1edcc96e1706-combined-ca-bundle\") pod \"7c8f1851-630d-4db5-8f53-1edcc96e1706\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.264881 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gp9k\" (UniqueName: \"kubernetes.io/projected/7c8f1851-630d-4db5-8f53-1edcc96e1706-kube-api-access-9gp9k\") pod \"7c8f1851-630d-4db5-8f53-1edcc96e1706\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.264900 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/7c8f1851-630d-4db5-8f53-1edcc96e1706-log-httpd\") pod \"7c8f1851-630d-4db5-8f53-1edcc96e1706\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.264939 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c8f1851-630d-4db5-8f53-1edcc96e1706-config-data\") pod \"7c8f1851-630d-4db5-8f53-1edcc96e1706\" (UID: \"7c8f1851-630d-4db5-8f53-1edcc96e1706\") " Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.265654 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c8f1851-630d-4db5-8f53-1edcc96e1706-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7c8f1851-630d-4db5-8f53-1edcc96e1706" (UID: "7c8f1851-630d-4db5-8f53-1edcc96e1706"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.265635 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c8f1851-630d-4db5-8f53-1edcc96e1706-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7c8f1851-630d-4db5-8f53-1edcc96e1706" (UID: "7c8f1851-630d-4db5-8f53-1edcc96e1706"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.268537 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c8f1851-630d-4db5-8f53-1edcc96e1706-kube-api-access-9gp9k" (OuterVolumeSpecName: "kube-api-access-9gp9k") pod "7c8f1851-630d-4db5-8f53-1edcc96e1706" (UID: "7c8f1851-630d-4db5-8f53-1edcc96e1706"). InnerVolumeSpecName "kube-api-access-9gp9k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.268706 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c8f1851-630d-4db5-8f53-1edcc96e1706-scripts" (OuterVolumeSpecName: "scripts") pod "7c8f1851-630d-4db5-8f53-1edcc96e1706" (UID: "7c8f1851-630d-4db5-8f53-1edcc96e1706"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.298337 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c8f1851-630d-4db5-8f53-1edcc96e1706-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7c8f1851-630d-4db5-8f53-1edcc96e1706" (UID: "7c8f1851-630d-4db5-8f53-1edcc96e1706"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.359703 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c8f1851-630d-4db5-8f53-1edcc96e1706-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c8f1851-630d-4db5-8f53-1edcc96e1706" (UID: "7c8f1851-630d-4db5-8f53-1edcc96e1706"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.367569 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c8f1851-630d-4db5-8f53-1edcc96e1706-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.367604 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gp9k\" (UniqueName: \"kubernetes.io/projected/7c8f1851-630d-4db5-8f53-1edcc96e1706-kube-api-access-9gp9k\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.367615 4811 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c8f1851-630d-4db5-8f53-1edcc96e1706-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.367624 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c8f1851-630d-4db5-8f53-1edcc96e1706-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.367633 4811 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c8f1851-630d-4db5-8f53-1edcc96e1706-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.367641 4811 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c8f1851-630d-4db5-8f53-1edcc96e1706-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.373873 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c8f1851-630d-4db5-8f53-1edcc96e1706-config-data" (OuterVolumeSpecName: "config-data") pod "7c8f1851-630d-4db5-8f53-1edcc96e1706" (UID: "7c8f1851-630d-4db5-8f53-1edcc96e1706"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.469334 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c8f1851-630d-4db5-8f53-1edcc96e1706-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.908112 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.909269 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c8f1851-630d-4db5-8f53-1edcc96e1706","Type":"ContainerDied","Data":"6c86e1d4a6a2127a3d267402899f75ad84bda7f9cdd4ce9eb38b1184d8f3ccfc"} Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.909301 4811 scope.go:117] "RemoveContainer" containerID="14870886c0c4cdec5a84c500928d1f8fd68663a4d06bb779aa686b4c919452b8" Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.911601 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"48d3b16f-0a4b-42bc-9443-19ce343df00a","Type":"ContainerStarted","Data":"acc39086b5775e4fc20784c115b93eed33b659e9447b444d3ccdc9af1878fef3"} Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.915956 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-69cc95b6b9-n22wz" event={"ID":"a6d638a7-6781-47f5-af27-712f046ec70a","Type":"ContainerStarted","Data":"79b5e71c2a5d96c8ffd96ef236c2b72eac45ef92a9e6477dcea009ce59489dd3"} Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.916001 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-69cc95b6b9-n22wz" event={"ID":"a6d638a7-6781-47f5-af27-712f046ec70a","Type":"ContainerStarted","Data":"73bf693191ea0d0e9e373a7278e26c2c6f6b64f29fef31e7fcc457201693a360"} Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.916396 4811 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.916564 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.935538 4811 scope.go:117] "RemoveContainer" containerID="2e5cf43f4fab9cf34ea911a08fc98814045502156f1eedfde384825e654f48ac" Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.939070 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.447749773 podStartE2EDuration="11.939047964s" podCreationTimestamp="2026-02-16 21:14:31 +0000 UTC" firstStartedPulling="2026-02-16 21:14:32.419103884 +0000 UTC m=+1090.348399822" lastFinishedPulling="2026-02-16 21:14:41.910402075 +0000 UTC m=+1099.839698013" observedRunningTime="2026-02-16 21:14:42.929394028 +0000 UTC m=+1100.858689966" watchObservedRunningTime="2026-02-16 21:14:42.939047964 +0000 UTC m=+1100.868343912" Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.963689 4811 scope.go:117] "RemoveContainer" containerID="4123c2b47bef8639280abfe06dcefe985e66b445d0e9a8a7700a19b605ab5333" Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.968021 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:14:42 crc kubenswrapper[4811]: I0216 21:14:42.990083 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.004796 4811 scope.go:117] "RemoveContainer" containerID="739b7a86ad89f7167b6225ae1a8e6771de0f13bcdae6f03d582802000a85e879" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.015111 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:14:43 crc kubenswrapper[4811]: E0216 21:14:43.015614 4811 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="7c8f1851-630d-4db5-8f53-1edcc96e1706" containerName="proxy-httpd" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.015648 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8f1851-630d-4db5-8f53-1edcc96e1706" containerName="proxy-httpd" Feb 16 21:14:43 crc kubenswrapper[4811]: E0216 21:14:43.015670 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8f1851-630d-4db5-8f53-1edcc96e1706" containerName="ceilometer-notification-agent" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.015679 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8f1851-630d-4db5-8f53-1edcc96e1706" containerName="ceilometer-notification-agent" Feb 16 21:14:43 crc kubenswrapper[4811]: E0216 21:14:43.015691 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8f1851-630d-4db5-8f53-1edcc96e1706" containerName="ceilometer-central-agent" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.015703 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8f1851-630d-4db5-8f53-1edcc96e1706" containerName="ceilometer-central-agent" Feb 16 21:14:43 crc kubenswrapper[4811]: E0216 21:14:43.015727 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8f1851-630d-4db5-8f53-1edcc96e1706" containerName="sg-core" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.015736 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8f1851-630d-4db5-8f53-1edcc96e1706" containerName="sg-core" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.016028 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c8f1851-630d-4db5-8f53-1edcc96e1706" containerName="ceilometer-central-agent" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.016051 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c8f1851-630d-4db5-8f53-1edcc96e1706" containerName="ceilometer-notification-agent" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.016072 4811 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7c8f1851-630d-4db5-8f53-1edcc96e1706" containerName="sg-core" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.016083 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c8f1851-630d-4db5-8f53-1edcc96e1706" containerName="proxy-httpd" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.049985 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.056301 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.056957 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.069844 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-69cc95b6b9-n22wz" podStartSLOduration=8.069822855 podStartE2EDuration="8.069822855s" podCreationTimestamp="2026-02-16 21:14:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:14:42.985141052 +0000 UTC m=+1100.914437010" watchObservedRunningTime="2026-02-16 21:14:43.069822855 +0000 UTC m=+1100.999118793" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.125248 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-gn59h"] Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.126510 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-gn59h" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.141394 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.154355 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-gn59h"] Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.167861 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-278vh"] Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.169293 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-278vh" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.185928 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-da63-account-create-update-cxjcz"] Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.187338 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-da63-account-create-update-cxjcz" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.190274 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.200349 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28abda8f-8607-4e77-85d4-ab50171c709a-config-data\") pod \"ceilometer-0\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " pod="openstack/ceilometer-0" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.200500 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28abda8f-8607-4e77-85d4-ab50171c709a-log-httpd\") pod \"ceilometer-0\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " pod="openstack/ceilometer-0" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.200528 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28abda8f-8607-4e77-85d4-ab50171c709a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " pod="openstack/ceilometer-0" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.200631 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v28t\" (UniqueName: \"kubernetes.io/projected/28abda8f-8607-4e77-85d4-ab50171c709a-kube-api-access-5v28t\") pod \"ceilometer-0\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " pod="openstack/ceilometer-0" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.200746 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/28abda8f-8607-4e77-85d4-ab50171c709a-scripts\") pod \"ceilometer-0\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " pod="openstack/ceilometer-0" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.200793 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28abda8f-8607-4e77-85d4-ab50171c709a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " pod="openstack/ceilometer-0" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.200813 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28abda8f-8607-4e77-85d4-ab50171c709a-run-httpd\") pod \"ceilometer-0\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " pod="openstack/ceilometer-0" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.205881 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-da63-account-create-update-cxjcz"] Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.214292 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-278vh"] Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.283035 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-6hccd"] Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.284988 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6hccd" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.293339 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1049-account-create-update-w7jrz"] Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.295115 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1049-account-create-update-w7jrz" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.298503 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.302242 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrcgb\" (UniqueName: \"kubernetes.io/projected/81372a4d-8d39-4692-8c7f-ed243fcf3822-kube-api-access-wrcgb\") pod \"nova-api-da63-account-create-update-cxjcz\" (UID: \"81372a4d-8d39-4692-8c7f-ed243fcf3822\") " pod="openstack/nova-api-da63-account-create-update-cxjcz" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.302289 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v28t\" (UniqueName: \"kubernetes.io/projected/28abda8f-8607-4e77-85d4-ab50171c709a-kube-api-access-5v28t\") pod \"ceilometer-0\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " pod="openstack/ceilometer-0" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.302336 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54b96249-bc72-45e5-9d7e-481deb69113b-operator-scripts\") pod \"nova-api-db-create-gn59h\" (UID: \"54b96249-bc72-45e5-9d7e-481deb69113b\") " pod="openstack/nova-api-db-create-gn59h" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.303420 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6hccd"] Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.304486 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28abda8f-8607-4e77-85d4-ab50171c709a-scripts\") pod \"ceilometer-0\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " pod="openstack/ceilometer-0" Feb 16 21:14:43 crc 
kubenswrapper[4811]: I0216 21:14:43.304528 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2bfn\" (UniqueName: \"kubernetes.io/projected/54b96249-bc72-45e5-9d7e-481deb69113b-kube-api-access-v2bfn\") pod \"nova-api-db-create-gn59h\" (UID: \"54b96249-bc72-45e5-9d7e-481deb69113b\") " pod="openstack/nova-api-db-create-gn59h" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.304561 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28abda8f-8607-4e77-85d4-ab50171c709a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " pod="openstack/ceilometer-0" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.304575 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28abda8f-8607-4e77-85d4-ab50171c709a-run-httpd\") pod \"ceilometer-0\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " pod="openstack/ceilometer-0" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.304616 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28abda8f-8607-4e77-85d4-ab50171c709a-config-data\") pod \"ceilometer-0\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " pod="openstack/ceilometer-0" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.304699 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gthwg\" (UniqueName: \"kubernetes.io/projected/155cab82-ef10-4ce4-8116-f3f80558987d-kube-api-access-gthwg\") pod \"nova-cell0-db-create-278vh\" (UID: \"155cab82-ef10-4ce4-8116-f3f80558987d\") " pod="openstack/nova-cell0-db-create-278vh" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.304772 4811 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28abda8f-8607-4e77-85d4-ab50171c709a-log-httpd\") pod \"ceilometer-0\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " pod="openstack/ceilometer-0" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.304795 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28abda8f-8607-4e77-85d4-ab50171c709a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " pod="openstack/ceilometer-0" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.304835 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81372a4d-8d39-4692-8c7f-ed243fcf3822-operator-scripts\") pod \"nova-api-da63-account-create-update-cxjcz\" (UID: \"81372a4d-8d39-4692-8c7f-ed243fcf3822\") " pod="openstack/nova-api-da63-account-create-update-cxjcz" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.304856 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/155cab82-ef10-4ce4-8116-f3f80558987d-operator-scripts\") pod \"nova-cell0-db-create-278vh\" (UID: \"155cab82-ef10-4ce4-8116-f3f80558987d\") " pod="openstack/nova-cell0-db-create-278vh" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.305271 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28abda8f-8607-4e77-85d4-ab50171c709a-run-httpd\") pod \"ceilometer-0\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " pod="openstack/ceilometer-0" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.309262 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28abda8f-8607-4e77-85d4-ab50171c709a-log-httpd\") 
pod \"ceilometer-0\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " pod="openstack/ceilometer-0" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.318295 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28abda8f-8607-4e77-85d4-ab50171c709a-scripts\") pod \"ceilometer-0\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " pod="openstack/ceilometer-0" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.318292 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28abda8f-8607-4e77-85d4-ab50171c709a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " pod="openstack/ceilometer-0" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.319318 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28abda8f-8607-4e77-85d4-ab50171c709a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " pod="openstack/ceilometer-0" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.323639 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v28t\" (UniqueName: \"kubernetes.io/projected/28abda8f-8607-4e77-85d4-ab50171c709a-kube-api-access-5v28t\") pod \"ceilometer-0\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " pod="openstack/ceilometer-0" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.333327 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28abda8f-8607-4e77-85d4-ab50171c709a-config-data\") pod \"ceilometer-0\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " pod="openstack/ceilometer-0" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.347071 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell0-1049-account-create-update-w7jrz"] Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.386897 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.406378 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81372a4d-8d39-4692-8c7f-ed243fcf3822-operator-scripts\") pod \"nova-api-da63-account-create-update-cxjcz\" (UID: \"81372a4d-8d39-4692-8c7f-ed243fcf3822\") " pod="openstack/nova-api-da63-account-create-update-cxjcz" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.406415 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/155cab82-ef10-4ce4-8116-f3f80558987d-operator-scripts\") pod \"nova-cell0-db-create-278vh\" (UID: \"155cab82-ef10-4ce4-8116-f3f80558987d\") " pod="openstack/nova-cell0-db-create-278vh" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.407116 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/155cab82-ef10-4ce4-8116-f3f80558987d-operator-scripts\") pod \"nova-cell0-db-create-278vh\" (UID: \"155cab82-ef10-4ce4-8116-f3f80558987d\") " pod="openstack/nova-cell0-db-create-278vh" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.407227 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81372a4d-8d39-4692-8c7f-ed243fcf3822-operator-scripts\") pod \"nova-api-da63-account-create-update-cxjcz\" (UID: \"81372a4d-8d39-4692-8c7f-ed243fcf3822\") " pod="openstack/nova-api-da63-account-create-update-cxjcz" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.407305 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6ms4f\" (UniqueName: \"kubernetes.io/projected/35741874-d08a-4633-8b5f-438b7a3f6d12-kube-api-access-6ms4f\") pod \"nova-cell1-db-create-6hccd\" (UID: \"35741874-d08a-4633-8b5f-438b7a3f6d12\") " pod="openstack/nova-cell1-db-create-6hccd" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.407375 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrcgb\" (UniqueName: \"kubernetes.io/projected/81372a4d-8d39-4692-8c7f-ed243fcf3822-kube-api-access-wrcgb\") pod \"nova-api-da63-account-create-update-cxjcz\" (UID: \"81372a4d-8d39-4692-8c7f-ed243fcf3822\") " pod="openstack/nova-api-da63-account-create-update-cxjcz" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.407413 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35741874-d08a-4633-8b5f-438b7a3f6d12-operator-scripts\") pod \"nova-cell1-db-create-6hccd\" (UID: \"35741874-d08a-4633-8b5f-438b7a3f6d12\") " pod="openstack/nova-cell1-db-create-6hccd" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.407457 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54b96249-bc72-45e5-9d7e-481deb69113b-operator-scripts\") pod \"nova-api-db-create-gn59h\" (UID: \"54b96249-bc72-45e5-9d7e-481deb69113b\") " pod="openstack/nova-api-db-create-gn59h" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.407495 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2bfn\" (UniqueName: \"kubernetes.io/projected/54b96249-bc72-45e5-9d7e-481deb69113b-kube-api-access-v2bfn\") pod \"nova-api-db-create-gn59h\" (UID: \"54b96249-bc72-45e5-9d7e-481deb69113b\") " pod="openstack/nova-api-db-create-gn59h" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.407576 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k7zd\" (UniqueName: \"kubernetes.io/projected/41ab0943-b07e-4c02-89df-e4768d30f129-kube-api-access-7k7zd\") pod \"nova-cell0-1049-account-create-update-w7jrz\" (UID: \"41ab0943-b07e-4c02-89df-e4768d30f129\") " pod="openstack/nova-cell0-1049-account-create-update-w7jrz" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.407626 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gthwg\" (UniqueName: \"kubernetes.io/projected/155cab82-ef10-4ce4-8116-f3f80558987d-kube-api-access-gthwg\") pod \"nova-cell0-db-create-278vh\" (UID: \"155cab82-ef10-4ce4-8116-f3f80558987d\") " pod="openstack/nova-cell0-db-create-278vh" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.407683 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41ab0943-b07e-4c02-89df-e4768d30f129-operator-scripts\") pod \"nova-cell0-1049-account-create-update-w7jrz\" (UID: \"41ab0943-b07e-4c02-89df-e4768d30f129\") " pod="openstack/nova-cell0-1049-account-create-update-w7jrz" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.407943 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54b96249-bc72-45e5-9d7e-481deb69113b-operator-scripts\") pod \"nova-api-db-create-gn59h\" (UID: \"54b96249-bc72-45e5-9d7e-481deb69113b\") " pod="openstack/nova-api-db-create-gn59h" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.424501 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gthwg\" (UniqueName: \"kubernetes.io/projected/155cab82-ef10-4ce4-8116-f3f80558987d-kube-api-access-gthwg\") pod \"nova-cell0-db-create-278vh\" (UID: \"155cab82-ef10-4ce4-8116-f3f80558987d\") " pod="openstack/nova-cell0-db-create-278vh" Feb 16 21:14:43 crc 
kubenswrapper[4811]: I0216 21:14:43.427350 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2bfn\" (UniqueName: \"kubernetes.io/projected/54b96249-bc72-45e5-9d7e-481deb69113b-kube-api-access-v2bfn\") pod \"nova-api-db-create-gn59h\" (UID: \"54b96249-bc72-45e5-9d7e-481deb69113b\") " pod="openstack/nova-api-db-create-gn59h" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.428100 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrcgb\" (UniqueName: \"kubernetes.io/projected/81372a4d-8d39-4692-8c7f-ed243fcf3822-kube-api-access-wrcgb\") pod \"nova-api-da63-account-create-update-cxjcz\" (UID: \"81372a4d-8d39-4692-8c7f-ed243fcf3822\") " pod="openstack/nova-api-da63-account-create-update-cxjcz" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.445410 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-gn59h" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.495286 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-278vh" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.500099 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-4bec-account-create-update-zg28l"] Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.501533 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4bec-account-create-update-zg28l" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.509250 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.510042 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-da63-account-create-update-cxjcz" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.511181 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35741874-d08a-4633-8b5f-438b7a3f6d12-operator-scripts\") pod \"nova-cell1-db-create-6hccd\" (UID: \"35741874-d08a-4633-8b5f-438b7a3f6d12\") " pod="openstack/nova-cell1-db-create-6hccd" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.511312 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k7zd\" (UniqueName: \"kubernetes.io/projected/41ab0943-b07e-4c02-89df-e4768d30f129-kube-api-access-7k7zd\") pod \"nova-cell0-1049-account-create-update-w7jrz\" (UID: \"41ab0943-b07e-4c02-89df-e4768d30f129\") " pod="openstack/nova-cell0-1049-account-create-update-w7jrz" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.511366 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41ab0943-b07e-4c02-89df-e4768d30f129-operator-scripts\") pod \"nova-cell0-1049-account-create-update-w7jrz\" (UID: \"41ab0943-b07e-4c02-89df-e4768d30f129\") " pod="openstack/nova-cell0-1049-account-create-update-w7jrz" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.511392 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b583703-c772-4c1c-895d-6c410f34c439-operator-scripts\") pod \"nova-cell1-4bec-account-create-update-zg28l\" (UID: \"0b583703-c772-4c1c-895d-6c410f34c439\") " pod="openstack/nova-cell1-4bec-account-create-update-zg28l" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.511444 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9bxr\" (UniqueName: 
\"kubernetes.io/projected/0b583703-c772-4c1c-895d-6c410f34c439-kube-api-access-m9bxr\") pod \"nova-cell1-4bec-account-create-update-zg28l\" (UID: \"0b583703-c772-4c1c-895d-6c410f34c439\") " pod="openstack/nova-cell1-4bec-account-create-update-zg28l" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.511470 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ms4f\" (UniqueName: \"kubernetes.io/projected/35741874-d08a-4633-8b5f-438b7a3f6d12-kube-api-access-6ms4f\") pod \"nova-cell1-db-create-6hccd\" (UID: \"35741874-d08a-4633-8b5f-438b7a3f6d12\") " pod="openstack/nova-cell1-db-create-6hccd" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.512338 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41ab0943-b07e-4c02-89df-e4768d30f129-operator-scripts\") pod \"nova-cell0-1049-account-create-update-w7jrz\" (UID: \"41ab0943-b07e-4c02-89df-e4768d30f129\") " pod="openstack/nova-cell0-1049-account-create-update-w7jrz" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.513946 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35741874-d08a-4633-8b5f-438b7a3f6d12-operator-scripts\") pod \"nova-cell1-db-create-6hccd\" (UID: \"35741874-d08a-4633-8b5f-438b7a3f6d12\") " pod="openstack/nova-cell1-db-create-6hccd" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.533559 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4bec-account-create-update-zg28l"] Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.542960 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ms4f\" (UniqueName: \"kubernetes.io/projected/35741874-d08a-4633-8b5f-438b7a3f6d12-kube-api-access-6ms4f\") pod \"nova-cell1-db-create-6hccd\" (UID: \"35741874-d08a-4633-8b5f-438b7a3f6d12\") " 
pod="openstack/nova-cell1-db-create-6hccd" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.543927 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k7zd\" (UniqueName: \"kubernetes.io/projected/41ab0943-b07e-4c02-89df-e4768d30f129-kube-api-access-7k7zd\") pod \"nova-cell0-1049-account-create-update-w7jrz\" (UID: \"41ab0943-b07e-4c02-89df-e4768d30f129\") " pod="openstack/nova-cell0-1049-account-create-update-w7jrz" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.608131 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6hccd" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.615510 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b583703-c772-4c1c-895d-6c410f34c439-operator-scripts\") pod \"nova-cell1-4bec-account-create-update-zg28l\" (UID: \"0b583703-c772-4c1c-895d-6c410f34c439\") " pod="openstack/nova-cell1-4bec-account-create-update-zg28l" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.615611 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9bxr\" (UniqueName: \"kubernetes.io/projected/0b583703-c772-4c1c-895d-6c410f34c439-kube-api-access-m9bxr\") pod \"nova-cell1-4bec-account-create-update-zg28l\" (UID: \"0b583703-c772-4c1c-895d-6c410f34c439\") " pod="openstack/nova-cell1-4bec-account-create-update-zg28l" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.616534 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b583703-c772-4c1c-895d-6c410f34c439-operator-scripts\") pod \"nova-cell1-4bec-account-create-update-zg28l\" (UID: \"0b583703-c772-4c1c-895d-6c410f34c439\") " pod="openstack/nova-cell1-4bec-account-create-update-zg28l" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.631029 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9bxr\" (UniqueName: \"kubernetes.io/projected/0b583703-c772-4c1c-895d-6c410f34c439-kube-api-access-m9bxr\") pod \"nova-cell1-4bec-account-create-update-zg28l\" (UID: \"0b583703-c772-4c1c-895d-6c410f34c439\") " pod="openstack/nova-cell1-4bec-account-create-update-zg28l" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.813318 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1049-account-create-update-w7jrz" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.840711 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4bec-account-create-update-zg28l" Feb 16 21:14:43 crc kubenswrapper[4811]: I0216 21:14:43.926398 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:14:44 crc kubenswrapper[4811]: I0216 21:14:44.345891 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-278vh"] Feb 16 21:14:44 crc kubenswrapper[4811]: I0216 21:14:44.392087 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-gn59h"] Feb 16 21:14:44 crc kubenswrapper[4811]: I0216 21:14:44.561301 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-da63-account-create-update-cxjcz"] Feb 16 21:14:44 crc kubenswrapper[4811]: I0216 21:14:44.658092 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6hccd"] Feb 16 21:14:44 crc kubenswrapper[4811]: I0216 21:14:44.680146 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1049-account-create-update-w7jrz"] Feb 16 21:14:44 crc kubenswrapper[4811]: I0216 21:14:44.719904 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c8f1851-630d-4db5-8f53-1edcc96e1706" path="/var/lib/kubelet/pods/7c8f1851-630d-4db5-8f53-1edcc96e1706/volumes" Feb 16 
21:14:44 crc kubenswrapper[4811]: I0216 21:14:44.789652 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4bec-account-create-update-zg28l"] Feb 16 21:14:44 crc kubenswrapper[4811]: I0216 21:14:44.975997 4811 generic.go:334] "Generic (PLEG): container finished" podID="54b96249-bc72-45e5-9d7e-481deb69113b" containerID="ca65d8083953c4efb197939751bda8fc23493af4ad3eb1a4a58ee78b70c9c7f2" exitCode=0 Feb 16 21:14:44 crc kubenswrapper[4811]: I0216 21:14:44.976089 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-gn59h" event={"ID":"54b96249-bc72-45e5-9d7e-481deb69113b","Type":"ContainerDied","Data":"ca65d8083953c4efb197939751bda8fc23493af4ad3eb1a4a58ee78b70c9c7f2"} Feb 16 21:14:44 crc kubenswrapper[4811]: I0216 21:14:44.976315 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-gn59h" event={"ID":"54b96249-bc72-45e5-9d7e-481deb69113b","Type":"ContainerStarted","Data":"f74f1f682fdbc33ed2e62bc3f5f1a9c351807d6645bf90cb5f27592db69b8f58"} Feb 16 21:14:44 crc kubenswrapper[4811]: I0216 21:14:44.978099 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6hccd" event={"ID":"35741874-d08a-4633-8b5f-438b7a3f6d12","Type":"ContainerStarted","Data":"7565c81ff33c4719e2fc870db567b4d6718efa65497d5dd589245dd50f84bf92"} Feb 16 21:14:44 crc kubenswrapper[4811]: I0216 21:14:44.978144 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6hccd" event={"ID":"35741874-d08a-4633-8b5f-438b7a3f6d12","Type":"ContainerStarted","Data":"e0da0b0c79d73323d8c06271030f79c56137d6f0331383b481e67d48c43e0c63"} Feb 16 21:14:44 crc kubenswrapper[4811]: I0216 21:14:44.979969 4811 generic.go:334] "Generic (PLEG): container finished" podID="155cab82-ef10-4ce4-8116-f3f80558987d" containerID="01a36ee76f1df2142cbc58fc40848184fafa36cdf8a55e27653abe852043214b" exitCode=0 Feb 16 21:14:44 crc kubenswrapper[4811]: I0216 21:14:44.980049 4811 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-278vh" event={"ID":"155cab82-ef10-4ce4-8116-f3f80558987d","Type":"ContainerDied","Data":"01a36ee76f1df2142cbc58fc40848184fafa36cdf8a55e27653abe852043214b"} Feb 16 21:14:44 crc kubenswrapper[4811]: I0216 21:14:44.980073 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-278vh" event={"ID":"155cab82-ef10-4ce4-8116-f3f80558987d","Type":"ContainerStarted","Data":"af668cef7d01bae9abe53a90a310100b6dd6bee58ef2d609d431093435692c21"} Feb 16 21:14:44 crc kubenswrapper[4811]: I0216 21:14:44.981600 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-da63-account-create-update-cxjcz" event={"ID":"81372a4d-8d39-4692-8c7f-ed243fcf3822","Type":"ContainerStarted","Data":"3da90750b694462c35ced5df17c872be656b88d7528113bdda19eb659240aff1"} Feb 16 21:14:44 crc kubenswrapper[4811]: I0216 21:14:44.981705 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-da63-account-create-update-cxjcz" event={"ID":"81372a4d-8d39-4692-8c7f-ed243fcf3822","Type":"ContainerStarted","Data":"59a653437f765092423d02053c84d07ab5ebd98ae8a79d24d1e7d74b406d597a"} Feb 16 21:14:45 crc kubenswrapper[4811]: I0216 21:14:45.001440 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4bec-account-create-update-zg28l" event={"ID":"0b583703-c772-4c1c-895d-6c410f34c439","Type":"ContainerStarted","Data":"bb914051cdcabe7a9ae4bd9f1352b9aaa14be0138639da605ad2a8c1f6e1c5d7"} Feb 16 21:14:45 crc kubenswrapper[4811]: I0216 21:14:45.005546 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1049-account-create-update-w7jrz" event={"ID":"41ab0943-b07e-4c02-89df-e4768d30f129","Type":"ContainerStarted","Data":"6cfb73a10f65b939a340f75eb824896b2811af43eb83dea282cf7d468d013a71"} Feb 16 21:14:45 crc kubenswrapper[4811]: I0216 21:14:45.005590 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-1049-account-create-update-w7jrz" event={"ID":"41ab0943-b07e-4c02-89df-e4768d30f129","Type":"ContainerStarted","Data":"f486a7dd86f54d4862663a5b958a76871e0dccf1c3fbc8db4aab06841a56d8f0"} Feb 16 21:14:45 crc kubenswrapper[4811]: I0216 21:14:45.017319 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28abda8f-8607-4e77-85d4-ab50171c709a","Type":"ContainerStarted","Data":"6dd7ff72a64e573211cd03956e1e245e6b60ba617515e7e6149ec48d89339f85"} Feb 16 21:14:45 crc kubenswrapper[4811]: I0216 21:14:45.017379 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28abda8f-8607-4e77-85d4-ab50171c709a","Type":"ContainerStarted","Data":"e6e6ae8a6a2596442ff08ee42811d791ecb37d91459aee3cfb459997f29aa841"} Feb 16 21:14:45 crc kubenswrapper[4811]: I0216 21:14:45.020718 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-6hccd" podStartSLOduration=2.020698173 podStartE2EDuration="2.020698173s" podCreationTimestamp="2026-02-16 21:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:14:45.010423961 +0000 UTC m=+1102.939719899" watchObservedRunningTime="2026-02-16 21:14:45.020698173 +0000 UTC m=+1102.949994101" Feb 16 21:14:45 crc kubenswrapper[4811]: I0216 21:14:45.032988 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-da63-account-create-update-cxjcz" podStartSLOduration=2.032971537 podStartE2EDuration="2.032971537s" podCreationTimestamp="2026-02-16 21:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:14:45.031861478 +0000 UTC m=+1102.961157416" watchObservedRunningTime="2026-02-16 21:14:45.032971537 +0000 UTC m=+1102.962267475" Feb 16 21:14:45 crc 
kubenswrapper[4811]: I0216 21:14:45.115639 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-1049-account-create-update-w7jrz" podStartSLOduration=2.115622768 podStartE2EDuration="2.115622768s" podCreationTimestamp="2026-02-16 21:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:14:45.07695247 +0000 UTC m=+1103.006248418" watchObservedRunningTime="2026-02-16 21:14:45.115622768 +0000 UTC m=+1103.044918706" Feb 16 21:14:45 crc kubenswrapper[4811]: I0216 21:14:45.682306 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:14:45 crc kubenswrapper[4811]: E0216 21:14:45.704422 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:14:46 crc kubenswrapper[4811]: I0216 21:14:46.036987 4811 generic.go:334] "Generic (PLEG): container finished" podID="81372a4d-8d39-4692-8c7f-ed243fcf3822" containerID="3da90750b694462c35ced5df17c872be656b88d7528113bdda19eb659240aff1" exitCode=0 Feb 16 21:14:46 crc kubenswrapper[4811]: I0216 21:14:46.037428 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-da63-account-create-update-cxjcz" event={"ID":"81372a4d-8d39-4692-8c7f-ed243fcf3822","Type":"ContainerDied","Data":"3da90750b694462c35ced5df17c872be656b88d7528113bdda19eb659240aff1"} Feb 16 21:14:46 crc kubenswrapper[4811]: I0216 21:14:46.039820 4811 generic.go:334] "Generic (PLEG): container finished" podID="0b583703-c772-4c1c-895d-6c410f34c439" containerID="dced4156982ac92d8984986e268eb973bbacaa7f38e3e7131cd4194d05f8fa99" exitCode=0 Feb 16 21:14:46 crc kubenswrapper[4811]: I0216 
21:14:46.039876 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4bec-account-create-update-zg28l" event={"ID":"0b583703-c772-4c1c-895d-6c410f34c439","Type":"ContainerDied","Data":"dced4156982ac92d8984986e268eb973bbacaa7f38e3e7131cd4194d05f8fa99"} Feb 16 21:14:46 crc kubenswrapper[4811]: I0216 21:14:46.041944 4811 generic.go:334] "Generic (PLEG): container finished" podID="41ab0943-b07e-4c02-89df-e4768d30f129" containerID="6cfb73a10f65b939a340f75eb824896b2811af43eb83dea282cf7d468d013a71" exitCode=0 Feb 16 21:14:46 crc kubenswrapper[4811]: I0216 21:14:46.042191 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1049-account-create-update-w7jrz" event={"ID":"41ab0943-b07e-4c02-89df-e4768d30f129","Type":"ContainerDied","Data":"6cfb73a10f65b939a340f75eb824896b2811af43eb83dea282cf7d468d013a71"} Feb 16 21:14:46 crc kubenswrapper[4811]: I0216 21:14:46.048568 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28abda8f-8607-4e77-85d4-ab50171c709a","Type":"ContainerStarted","Data":"22e42a6660413f158e6590b5ff4b4d4e7bb05829329b7abc71383a539c5a63cd"} Feb 16 21:14:46 crc kubenswrapper[4811]: I0216 21:14:46.052039 4811 generic.go:334] "Generic (PLEG): container finished" podID="35741874-d08a-4633-8b5f-438b7a3f6d12" containerID="7565c81ff33c4719e2fc870db567b4d6718efa65497d5dd589245dd50f84bf92" exitCode=0 Feb 16 21:14:46 crc kubenswrapper[4811]: I0216 21:14:46.052478 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6hccd" event={"ID":"35741874-d08a-4633-8b5f-438b7a3f6d12","Type":"ContainerDied","Data":"7565c81ff33c4719e2fc870db567b4d6718efa65497d5dd589245dd50f84bf92"} Feb 16 21:14:46 crc kubenswrapper[4811]: I0216 21:14:46.667207 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-gn59h" Feb 16 21:14:46 crc kubenswrapper[4811]: I0216 21:14:46.714647 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54b96249-bc72-45e5-9d7e-481deb69113b-operator-scripts\") pod \"54b96249-bc72-45e5-9d7e-481deb69113b\" (UID: \"54b96249-bc72-45e5-9d7e-481deb69113b\") " Feb 16 21:14:46 crc kubenswrapper[4811]: I0216 21:14:46.714711 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2bfn\" (UniqueName: \"kubernetes.io/projected/54b96249-bc72-45e5-9d7e-481deb69113b-kube-api-access-v2bfn\") pod \"54b96249-bc72-45e5-9d7e-481deb69113b\" (UID: \"54b96249-bc72-45e5-9d7e-481deb69113b\") " Feb 16 21:14:46 crc kubenswrapper[4811]: I0216 21:14:46.715409 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54b96249-bc72-45e5-9d7e-481deb69113b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "54b96249-bc72-45e5-9d7e-481deb69113b" (UID: "54b96249-bc72-45e5-9d7e-481deb69113b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:46 crc kubenswrapper[4811]: I0216 21:14:46.715952 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54b96249-bc72-45e5-9d7e-481deb69113b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:46 crc kubenswrapper[4811]: I0216 21:14:46.718540 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-278vh" Feb 16 21:14:46 crc kubenswrapper[4811]: I0216 21:14:46.722062 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54b96249-bc72-45e5-9d7e-481deb69113b-kube-api-access-v2bfn" (OuterVolumeSpecName: "kube-api-access-v2bfn") pod "54b96249-bc72-45e5-9d7e-481deb69113b" (UID: "54b96249-bc72-45e5-9d7e-481deb69113b"). InnerVolumeSpecName "kube-api-access-v2bfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:14:46 crc kubenswrapper[4811]: I0216 21:14:46.817824 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/155cab82-ef10-4ce4-8116-f3f80558987d-operator-scripts\") pod \"155cab82-ef10-4ce4-8116-f3f80558987d\" (UID: \"155cab82-ef10-4ce4-8116-f3f80558987d\") " Feb 16 21:14:46 crc kubenswrapper[4811]: I0216 21:14:46.817869 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gthwg\" (UniqueName: \"kubernetes.io/projected/155cab82-ef10-4ce4-8116-f3f80558987d-kube-api-access-gthwg\") pod \"155cab82-ef10-4ce4-8116-f3f80558987d\" (UID: \"155cab82-ef10-4ce4-8116-f3f80558987d\") " Feb 16 21:14:46 crc kubenswrapper[4811]: I0216 21:14:46.818392 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2bfn\" (UniqueName: \"kubernetes.io/projected/54b96249-bc72-45e5-9d7e-481deb69113b-kube-api-access-v2bfn\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:46 crc kubenswrapper[4811]: I0216 21:14:46.818581 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/155cab82-ef10-4ce4-8116-f3f80558987d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "155cab82-ef10-4ce4-8116-f3f80558987d" (UID: "155cab82-ef10-4ce4-8116-f3f80558987d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:46 crc kubenswrapper[4811]: I0216 21:14:46.824483 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/155cab82-ef10-4ce4-8116-f3f80558987d-kube-api-access-gthwg" (OuterVolumeSpecName: "kube-api-access-gthwg") pod "155cab82-ef10-4ce4-8116-f3f80558987d" (UID: "155cab82-ef10-4ce4-8116-f3f80558987d"). InnerVolumeSpecName "kube-api-access-gthwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:14:46 crc kubenswrapper[4811]: I0216 21:14:46.920105 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/155cab82-ef10-4ce4-8116-f3f80558987d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:46 crc kubenswrapper[4811]: I0216 21:14:46.920140 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gthwg\" (UniqueName: \"kubernetes.io/projected/155cab82-ef10-4ce4-8116-f3f80558987d-kube-api-access-gthwg\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:47 crc kubenswrapper[4811]: I0216 21:14:47.069487 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28abda8f-8607-4e77-85d4-ab50171c709a","Type":"ContainerStarted","Data":"ac659e6135f2024beb4cdbe96cf7d40ce7760d91454336670aa36eae385eb2cb"} Feb 16 21:14:47 crc kubenswrapper[4811]: I0216 21:14:47.072495 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-gn59h" event={"ID":"54b96249-bc72-45e5-9d7e-481deb69113b","Type":"ContainerDied","Data":"f74f1f682fdbc33ed2e62bc3f5f1a9c351807d6645bf90cb5f27592db69b8f58"} Feb 16 21:14:47 crc kubenswrapper[4811]: I0216 21:14:47.072528 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f74f1f682fdbc33ed2e62bc3f5f1a9c351807d6645bf90cb5f27592db69b8f58" Feb 16 21:14:47 crc kubenswrapper[4811]: I0216 21:14:47.072571 4811 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-gn59h" Feb 16 21:14:47 crc kubenswrapper[4811]: I0216 21:14:47.075314 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-278vh" event={"ID":"155cab82-ef10-4ce4-8116-f3f80558987d","Type":"ContainerDied","Data":"af668cef7d01bae9abe53a90a310100b6dd6bee58ef2d609d431093435692c21"} Feb 16 21:14:47 crc kubenswrapper[4811]: I0216 21:14:47.075430 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af668cef7d01bae9abe53a90a310100b6dd6bee58ef2d609d431093435692c21" Feb 16 21:14:47 crc kubenswrapper[4811]: I0216 21:14:47.075487 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-278vh" Feb 16 21:14:47 crc kubenswrapper[4811]: I0216 21:14:47.587667 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6hccd" Feb 16 21:14:47 crc kubenswrapper[4811]: I0216 21:14:47.648499 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ms4f\" (UniqueName: \"kubernetes.io/projected/35741874-d08a-4633-8b5f-438b7a3f6d12-kube-api-access-6ms4f\") pod \"35741874-d08a-4633-8b5f-438b7a3f6d12\" (UID: \"35741874-d08a-4633-8b5f-438b7a3f6d12\") " Feb 16 21:14:47 crc kubenswrapper[4811]: I0216 21:14:47.648745 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35741874-d08a-4633-8b5f-438b7a3f6d12-operator-scripts\") pod \"35741874-d08a-4633-8b5f-438b7a3f6d12\" (UID: \"35741874-d08a-4633-8b5f-438b7a3f6d12\") " Feb 16 21:14:47 crc kubenswrapper[4811]: I0216 21:14:47.650087 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35741874-d08a-4633-8b5f-438b7a3f6d12-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"35741874-d08a-4633-8b5f-438b7a3f6d12" (UID: "35741874-d08a-4633-8b5f-438b7a3f6d12"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:47 crc kubenswrapper[4811]: I0216 21:14:47.658709 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35741874-d08a-4633-8b5f-438b7a3f6d12-kube-api-access-6ms4f" (OuterVolumeSpecName: "kube-api-access-6ms4f") pod "35741874-d08a-4633-8b5f-438b7a3f6d12" (UID: "35741874-d08a-4633-8b5f-438b7a3f6d12"). InnerVolumeSpecName "kube-api-access-6ms4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:14:47 crc kubenswrapper[4811]: I0216 21:14:47.752135 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35741874-d08a-4633-8b5f-438b7a3f6d12-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:47 crc kubenswrapper[4811]: I0216 21:14:47.752181 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ms4f\" (UniqueName: \"kubernetes.io/projected/35741874-d08a-4633-8b5f-438b7a3f6d12-kube-api-access-6ms4f\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:48 crc kubenswrapper[4811]: I0216 21:14:48.086689 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6hccd" event={"ID":"35741874-d08a-4633-8b5f-438b7a3f6d12","Type":"ContainerDied","Data":"e0da0b0c79d73323d8c06271030f79c56137d6f0331383b481e67d48c43e0c63"} Feb 16 21:14:48 crc kubenswrapper[4811]: I0216 21:14:48.086733 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0da0b0c79d73323d8c06271030f79c56137d6f0331383b481e67d48c43e0c63" Feb 16 21:14:48 crc kubenswrapper[4811]: I0216 21:14:48.086768 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-6hccd" Feb 16 21:14:48 crc kubenswrapper[4811]: I0216 21:14:48.364071 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:14:48 crc kubenswrapper[4811]: I0216 21:14:48.364402 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:14:48 crc kubenswrapper[4811]: I0216 21:14:48.556612 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-da63-account-create-update-cxjcz" Feb 16 21:14:48 crc kubenswrapper[4811]: I0216 21:14:48.564330 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1049-account-create-update-w7jrz" Feb 16 21:14:48 crc kubenswrapper[4811]: I0216 21:14:48.569980 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-4bec-account-create-update-zg28l" Feb 16 21:14:48 crc kubenswrapper[4811]: I0216 21:14:48.667970 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrcgb\" (UniqueName: \"kubernetes.io/projected/81372a4d-8d39-4692-8c7f-ed243fcf3822-kube-api-access-wrcgb\") pod \"81372a4d-8d39-4692-8c7f-ed243fcf3822\" (UID: \"81372a4d-8d39-4692-8c7f-ed243fcf3822\") " Feb 16 21:14:48 crc kubenswrapper[4811]: I0216 21:14:48.668018 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b583703-c772-4c1c-895d-6c410f34c439-operator-scripts\") pod \"0b583703-c772-4c1c-895d-6c410f34c439\" (UID: \"0b583703-c772-4c1c-895d-6c410f34c439\") " Feb 16 21:14:48 crc kubenswrapper[4811]: I0216 21:14:48.668035 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9bxr\" (UniqueName: \"kubernetes.io/projected/0b583703-c772-4c1c-895d-6c410f34c439-kube-api-access-m9bxr\") pod \"0b583703-c772-4c1c-895d-6c410f34c439\" (UID: \"0b583703-c772-4c1c-895d-6c410f34c439\") " Feb 16 21:14:48 crc kubenswrapper[4811]: I0216 21:14:48.668267 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k7zd\" (UniqueName: \"kubernetes.io/projected/41ab0943-b07e-4c02-89df-e4768d30f129-kube-api-access-7k7zd\") pod \"41ab0943-b07e-4c02-89df-e4768d30f129\" (UID: \"41ab0943-b07e-4c02-89df-e4768d30f129\") " Feb 16 21:14:48 crc kubenswrapper[4811]: I0216 21:14:48.668310 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41ab0943-b07e-4c02-89df-e4768d30f129-operator-scripts\") pod \"41ab0943-b07e-4c02-89df-e4768d30f129\" (UID: \"41ab0943-b07e-4c02-89df-e4768d30f129\") " Feb 16 21:14:48 crc kubenswrapper[4811]: I0216 21:14:48.668385 4811 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81372a4d-8d39-4692-8c7f-ed243fcf3822-operator-scripts\") pod \"81372a4d-8d39-4692-8c7f-ed243fcf3822\" (UID: \"81372a4d-8d39-4692-8c7f-ed243fcf3822\") " Feb 16 21:14:48 crc kubenswrapper[4811]: I0216 21:14:48.668517 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b583703-c772-4c1c-895d-6c410f34c439-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0b583703-c772-4c1c-895d-6c410f34c439" (UID: "0b583703-c772-4c1c-895d-6c410f34c439"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:48 crc kubenswrapper[4811]: I0216 21:14:48.668785 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ab0943-b07e-4c02-89df-e4768d30f129-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "41ab0943-b07e-4c02-89df-e4768d30f129" (UID: "41ab0943-b07e-4c02-89df-e4768d30f129"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:48 crc kubenswrapper[4811]: I0216 21:14:48.668860 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81372a4d-8d39-4692-8c7f-ed243fcf3822-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81372a4d-8d39-4692-8c7f-ed243fcf3822" (UID: "81372a4d-8d39-4692-8c7f-ed243fcf3822"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:48 crc kubenswrapper[4811]: I0216 21:14:48.668870 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b583703-c772-4c1c-895d-6c410f34c439-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:48 crc kubenswrapper[4811]: I0216 21:14:48.668919 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41ab0943-b07e-4c02-89df-e4768d30f129-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:48 crc kubenswrapper[4811]: I0216 21:14:48.673449 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81372a4d-8d39-4692-8c7f-ed243fcf3822-kube-api-access-wrcgb" (OuterVolumeSpecName: "kube-api-access-wrcgb") pod "81372a4d-8d39-4692-8c7f-ed243fcf3822" (UID: "81372a4d-8d39-4692-8c7f-ed243fcf3822"). InnerVolumeSpecName "kube-api-access-wrcgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:14:48 crc kubenswrapper[4811]: I0216 21:14:48.674408 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b583703-c772-4c1c-895d-6c410f34c439-kube-api-access-m9bxr" (OuterVolumeSpecName: "kube-api-access-m9bxr") pod "0b583703-c772-4c1c-895d-6c410f34c439" (UID: "0b583703-c772-4c1c-895d-6c410f34c439"). InnerVolumeSpecName "kube-api-access-m9bxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:14:48 crc kubenswrapper[4811]: I0216 21:14:48.676765 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41ab0943-b07e-4c02-89df-e4768d30f129-kube-api-access-7k7zd" (OuterVolumeSpecName: "kube-api-access-7k7zd") pod "41ab0943-b07e-4c02-89df-e4768d30f129" (UID: "41ab0943-b07e-4c02-89df-e4768d30f129"). InnerVolumeSpecName "kube-api-access-7k7zd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:14:48 crc kubenswrapper[4811]: I0216 21:14:48.771109 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k7zd\" (UniqueName: \"kubernetes.io/projected/41ab0943-b07e-4c02-89df-e4768d30f129-kube-api-access-7k7zd\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:48 crc kubenswrapper[4811]: I0216 21:14:48.771141 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81372a4d-8d39-4692-8c7f-ed243fcf3822-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:48 crc kubenswrapper[4811]: I0216 21:14:48.771150 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrcgb\" (UniqueName: \"kubernetes.io/projected/81372a4d-8d39-4692-8c7f-ed243fcf3822-kube-api-access-wrcgb\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:48 crc kubenswrapper[4811]: I0216 21:14:48.771160 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9bxr\" (UniqueName: \"kubernetes.io/projected/0b583703-c772-4c1c-895d-6c410f34c439-kube-api-access-m9bxr\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:49 crc kubenswrapper[4811]: I0216 21:14:49.102811 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-da63-account-create-update-cxjcz" Feb 16 21:14:49 crc kubenswrapper[4811]: I0216 21:14:49.102640 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-da63-account-create-update-cxjcz" event={"ID":"81372a4d-8d39-4692-8c7f-ed243fcf3822","Type":"ContainerDied","Data":"59a653437f765092423d02053c84d07ab5ebd98ae8a79d24d1e7d74b406d597a"} Feb 16 21:14:49 crc kubenswrapper[4811]: I0216 21:14:49.102894 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59a653437f765092423d02053c84d07ab5ebd98ae8a79d24d1e7d74b406d597a" Feb 16 21:14:49 crc kubenswrapper[4811]: I0216 21:14:49.104603 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4bec-account-create-update-zg28l" Feb 16 21:14:49 crc kubenswrapper[4811]: I0216 21:14:49.104830 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4bec-account-create-update-zg28l" event={"ID":"0b583703-c772-4c1c-895d-6c410f34c439","Type":"ContainerDied","Data":"bb914051cdcabe7a9ae4bd9f1352b9aaa14be0138639da605ad2a8c1f6e1c5d7"} Feb 16 21:14:49 crc kubenswrapper[4811]: I0216 21:14:49.104862 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb914051cdcabe7a9ae4bd9f1352b9aaa14be0138639da605ad2a8c1f6e1c5d7" Feb 16 21:14:49 crc kubenswrapper[4811]: I0216 21:14:49.108098 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1049-account-create-update-w7jrz" event={"ID":"41ab0943-b07e-4c02-89df-e4768d30f129","Type":"ContainerDied","Data":"f486a7dd86f54d4862663a5b958a76871e0dccf1c3fbc8db4aab06841a56d8f0"} Feb 16 21:14:49 crc kubenswrapper[4811]: I0216 21:14:49.108136 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f486a7dd86f54d4862663a5b958a76871e0dccf1c3fbc8db4aab06841a56d8f0" Feb 16 21:14:49 crc kubenswrapper[4811]: I0216 21:14:49.108243 4811 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1049-account-create-update-w7jrz" Feb 16 21:14:50 crc kubenswrapper[4811]: I0216 21:14:50.126626 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28abda8f-8607-4e77-85d4-ab50171c709a","Type":"ContainerStarted","Data":"1b67498397efda23989c5ad9ff1328c369c7fe3142af38d1d41a9f38ae7aa197"} Feb 16 21:14:50 crc kubenswrapper[4811]: I0216 21:14:50.126946 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 21:14:50 crc kubenswrapper[4811]: I0216 21:14:50.126952 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28abda8f-8607-4e77-85d4-ab50171c709a" containerName="ceilometer-notification-agent" containerID="cri-o://22e42a6660413f158e6590b5ff4b4d4e7bb05829329b7abc71383a539c5a63cd" gracePeriod=30 Feb 16 21:14:50 crc kubenswrapper[4811]: I0216 21:14:50.127010 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28abda8f-8607-4e77-85d4-ab50171c709a" containerName="proxy-httpd" containerID="cri-o://1b67498397efda23989c5ad9ff1328c369c7fe3142af38d1d41a9f38ae7aa197" gracePeriod=30 Feb 16 21:14:50 crc kubenswrapper[4811]: I0216 21:14:50.126953 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28abda8f-8607-4e77-85d4-ab50171c709a" containerName="sg-core" containerID="cri-o://ac659e6135f2024beb4cdbe96cf7d40ce7760d91454336670aa36eae385eb2cb" gracePeriod=30 Feb 16 21:14:50 crc kubenswrapper[4811]: I0216 21:14:50.126978 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28abda8f-8607-4e77-85d4-ab50171c709a" containerName="ceilometer-central-agent" containerID="cri-o://6dd7ff72a64e573211cd03956e1e245e6b60ba617515e7e6149ec48d89339f85" gracePeriod=30 Feb 
16 21:14:50 crc kubenswrapper[4811]: I0216 21:14:50.181725 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.051633133 podStartE2EDuration="8.18170339s" podCreationTimestamp="2026-02-16 21:14:42 +0000 UTC" firstStartedPulling="2026-02-16 21:14:43.935759956 +0000 UTC m=+1101.865055894" lastFinishedPulling="2026-02-16 21:14:49.065830213 +0000 UTC m=+1106.995126151" observedRunningTime="2026-02-16 21:14:50.159612216 +0000 UTC m=+1108.088908174" watchObservedRunningTime="2026-02-16 21:14:50.18170339 +0000 UTC m=+1108.110999328" Feb 16 21:14:50 crc kubenswrapper[4811]: E0216 21:14:50.911964 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28abda8f_8607_4e77_85d4_ab50171c709a.slice/crio-conmon-6dd7ff72a64e573211cd03956e1e245e6b60ba617515e7e6149ec48d89339f85.scope\": RecentStats: unable to find data in memory cache]" Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.139041 4811 generic.go:334] "Generic (PLEG): container finished" podID="28abda8f-8607-4e77-85d4-ab50171c709a" containerID="1b67498397efda23989c5ad9ff1328c369c7fe3142af38d1d41a9f38ae7aa197" exitCode=0 Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.139364 4811 generic.go:334] "Generic (PLEG): container finished" podID="28abda8f-8607-4e77-85d4-ab50171c709a" containerID="ac659e6135f2024beb4cdbe96cf7d40ce7760d91454336670aa36eae385eb2cb" exitCode=2 Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.139378 4811 generic.go:334] "Generic (PLEG): container finished" podID="28abda8f-8607-4e77-85d4-ab50171c709a" containerID="22e42a6660413f158e6590b5ff4b4d4e7bb05829329b7abc71383a539c5a63cd" exitCode=0 Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.139389 4811 generic.go:334] "Generic (PLEG): container finished" podID="28abda8f-8607-4e77-85d4-ab50171c709a" 
containerID="6dd7ff72a64e573211cd03956e1e245e6b60ba617515e7e6149ec48d89339f85" exitCode=0 Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.139128 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28abda8f-8607-4e77-85d4-ab50171c709a","Type":"ContainerDied","Data":"1b67498397efda23989c5ad9ff1328c369c7fe3142af38d1d41a9f38ae7aa197"} Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.139479 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28abda8f-8607-4e77-85d4-ab50171c709a","Type":"ContainerDied","Data":"ac659e6135f2024beb4cdbe96cf7d40ce7760d91454336670aa36eae385eb2cb"} Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.139495 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28abda8f-8607-4e77-85d4-ab50171c709a","Type":"ContainerDied","Data":"22e42a6660413f158e6590b5ff4b4d4e7bb05829329b7abc71383a539c5a63cd"} Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.139506 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28abda8f-8607-4e77-85d4-ab50171c709a","Type":"ContainerDied","Data":"6dd7ff72a64e573211cd03956e1e245e6b60ba617515e7e6149ec48d89339f85"} Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.139515 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28abda8f-8607-4e77-85d4-ab50171c709a","Type":"ContainerDied","Data":"e6e6ae8a6a2596442ff08ee42811d791ecb37d91459aee3cfb459997f29aa841"} Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.139528 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6e6ae8a6a2596442ff08ee42811d791ecb37d91459aee3cfb459997f29aa841" Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.172843 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.219717 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28abda8f-8607-4e77-85d4-ab50171c709a-config-data\") pod \"28abda8f-8607-4e77-85d4-ab50171c709a\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.219907 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28abda8f-8607-4e77-85d4-ab50171c709a-combined-ca-bundle\") pod \"28abda8f-8607-4e77-85d4-ab50171c709a\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.220019 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v28t\" (UniqueName: \"kubernetes.io/projected/28abda8f-8607-4e77-85d4-ab50171c709a-kube-api-access-5v28t\") pod \"28abda8f-8607-4e77-85d4-ab50171c709a\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.220099 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28abda8f-8607-4e77-85d4-ab50171c709a-scripts\") pod \"28abda8f-8607-4e77-85d4-ab50171c709a\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.220169 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28abda8f-8607-4e77-85d4-ab50171c709a-log-httpd\") pod \"28abda8f-8607-4e77-85d4-ab50171c709a\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.220241 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/28abda8f-8607-4e77-85d4-ab50171c709a-sg-core-conf-yaml\") pod \"28abda8f-8607-4e77-85d4-ab50171c709a\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.220267 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28abda8f-8607-4e77-85d4-ab50171c709a-run-httpd\") pod \"28abda8f-8607-4e77-85d4-ab50171c709a\" (UID: \"28abda8f-8607-4e77-85d4-ab50171c709a\") " Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.220773 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28abda8f-8607-4e77-85d4-ab50171c709a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "28abda8f-8607-4e77-85d4-ab50171c709a" (UID: "28abda8f-8607-4e77-85d4-ab50171c709a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.221015 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28abda8f-8607-4e77-85d4-ab50171c709a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "28abda8f-8607-4e77-85d4-ab50171c709a" (UID: "28abda8f-8607-4e77-85d4-ab50171c709a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.229425 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28abda8f-8607-4e77-85d4-ab50171c709a-kube-api-access-5v28t" (OuterVolumeSpecName: "kube-api-access-5v28t") pod "28abda8f-8607-4e77-85d4-ab50171c709a" (UID: "28abda8f-8607-4e77-85d4-ab50171c709a"). InnerVolumeSpecName "kube-api-access-5v28t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.236782 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28abda8f-8607-4e77-85d4-ab50171c709a-scripts" (OuterVolumeSpecName: "scripts") pod "28abda8f-8607-4e77-85d4-ab50171c709a" (UID: "28abda8f-8607-4e77-85d4-ab50171c709a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.248749 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.250023 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-69cc95b6b9-n22wz" Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.253136 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28abda8f-8607-4e77-85d4-ab50171c709a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "28abda8f-8607-4e77-85d4-ab50171c709a" (UID: "28abda8f-8607-4e77-85d4-ab50171c709a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.325006 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28abda8f-8607-4e77-85d4-ab50171c709a-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.325043 4811 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28abda8f-8607-4e77-85d4-ab50171c709a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.325057 4811 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28abda8f-8607-4e77-85d4-ab50171c709a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.325090 4811 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28abda8f-8607-4e77-85d4-ab50171c709a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.325103 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v28t\" (UniqueName: \"kubernetes.io/projected/28abda8f-8607-4e77-85d4-ab50171c709a-kube-api-access-5v28t\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.341307 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28abda8f-8607-4e77-85d4-ab50171c709a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28abda8f-8607-4e77-85d4-ab50171c709a" (UID: "28abda8f-8607-4e77-85d4-ab50171c709a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.369880 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28abda8f-8607-4e77-85d4-ab50171c709a-config-data" (OuterVolumeSpecName: "config-data") pod "28abda8f-8607-4e77-85d4-ab50171c709a" (UID: "28abda8f-8607-4e77-85d4-ab50171c709a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.427362 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28abda8f-8607-4e77-85d4-ab50171c709a-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:51 crc kubenswrapper[4811]: I0216 21:14:51.427444 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28abda8f-8607-4e77-85d4-ab50171c709a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.151715 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.197216 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.215338 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.226530 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:14:52 crc kubenswrapper[4811]: E0216 21:14:52.226988 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28abda8f-8607-4e77-85d4-ab50171c709a" containerName="ceilometer-central-agent" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.227013 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="28abda8f-8607-4e77-85d4-ab50171c709a" containerName="ceilometer-central-agent" Feb 16 21:14:52 crc kubenswrapper[4811]: E0216 21:14:52.227029 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41ab0943-b07e-4c02-89df-e4768d30f129" containerName="mariadb-account-create-update" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.227036 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="41ab0943-b07e-4c02-89df-e4768d30f129" containerName="mariadb-account-create-update" Feb 16 21:14:52 crc kubenswrapper[4811]: E0216 21:14:52.227045 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28abda8f-8607-4e77-85d4-ab50171c709a" containerName="sg-core" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.227052 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="28abda8f-8607-4e77-85d4-ab50171c709a" containerName="sg-core" Feb 16 21:14:52 crc kubenswrapper[4811]: E0216 21:14:52.227064 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54b96249-bc72-45e5-9d7e-481deb69113b" containerName="mariadb-database-create" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.227075 4811 
state_mem.go:107] "Deleted CPUSet assignment" podUID="54b96249-bc72-45e5-9d7e-481deb69113b" containerName="mariadb-database-create" Feb 16 21:14:52 crc kubenswrapper[4811]: E0216 21:14:52.227093 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81372a4d-8d39-4692-8c7f-ed243fcf3822" containerName="mariadb-account-create-update" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.227101 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="81372a4d-8d39-4692-8c7f-ed243fcf3822" containerName="mariadb-account-create-update" Feb 16 21:14:52 crc kubenswrapper[4811]: E0216 21:14:52.227118 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="155cab82-ef10-4ce4-8116-f3f80558987d" containerName="mariadb-database-create" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.227124 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="155cab82-ef10-4ce4-8116-f3f80558987d" containerName="mariadb-database-create" Feb 16 21:14:52 crc kubenswrapper[4811]: E0216 21:14:52.227132 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28abda8f-8607-4e77-85d4-ab50171c709a" containerName="ceilometer-notification-agent" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.227138 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="28abda8f-8607-4e77-85d4-ab50171c709a" containerName="ceilometer-notification-agent" Feb 16 21:14:52 crc kubenswrapper[4811]: E0216 21:14:52.227146 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b583703-c772-4c1c-895d-6c410f34c439" containerName="mariadb-account-create-update" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.227153 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b583703-c772-4c1c-895d-6c410f34c439" containerName="mariadb-account-create-update" Feb 16 21:14:52 crc kubenswrapper[4811]: E0216 21:14:52.227163 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28abda8f-8607-4e77-85d4-ab50171c709a" 
containerName="proxy-httpd" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.227170 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="28abda8f-8607-4e77-85d4-ab50171c709a" containerName="proxy-httpd" Feb 16 21:14:52 crc kubenswrapper[4811]: E0216 21:14:52.227186 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35741874-d08a-4633-8b5f-438b7a3f6d12" containerName="mariadb-database-create" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.227211 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="35741874-d08a-4633-8b5f-438b7a3f6d12" containerName="mariadb-database-create" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.227412 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="35741874-d08a-4633-8b5f-438b7a3f6d12" containerName="mariadb-database-create" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.227428 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="28abda8f-8607-4e77-85d4-ab50171c709a" containerName="proxy-httpd" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.227439 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="28abda8f-8607-4e77-85d4-ab50171c709a" containerName="ceilometer-notification-agent" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.227451 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="41ab0943-b07e-4c02-89df-e4768d30f129" containerName="mariadb-account-create-update" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.227457 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b583703-c772-4c1c-895d-6c410f34c439" containerName="mariadb-account-create-update" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.227464 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="54b96249-bc72-45e5-9d7e-481deb69113b" containerName="mariadb-database-create" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.227474 4811 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="81372a4d-8d39-4692-8c7f-ed243fcf3822" containerName="mariadb-account-create-update" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.227483 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="28abda8f-8607-4e77-85d4-ab50171c709a" containerName="sg-core" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.227490 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="28abda8f-8607-4e77-85d4-ab50171c709a" containerName="ceilometer-central-agent" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.227499 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="155cab82-ef10-4ce4-8116-f3f80558987d" containerName="mariadb-database-create" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.229412 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.233302 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.233394 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.243921 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-run-httpd\") pod \"ceilometer-0\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " pod="openstack/ceilometer-0" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.244115 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j87qr\" (UniqueName: \"kubernetes.io/projected/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-kube-api-access-j87qr\") pod \"ceilometer-0\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " pod="openstack/ceilometer-0" 
Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.244272 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " pod="openstack/ceilometer-0" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.244303 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-log-httpd\") pod \"ceilometer-0\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " pod="openstack/ceilometer-0" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.244373 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-config-data\") pod \"ceilometer-0\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " pod="openstack/ceilometer-0" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.244452 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-scripts\") pod \"ceilometer-0\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " pod="openstack/ceilometer-0" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.244514 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " pod="openstack/ceilometer-0" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.259797 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] 
Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.346815 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " pod="openstack/ceilometer-0" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.346962 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-run-httpd\") pod \"ceilometer-0\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " pod="openstack/ceilometer-0" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.347592 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-run-httpd\") pod \"ceilometer-0\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " pod="openstack/ceilometer-0" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.347751 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j87qr\" (UniqueName: \"kubernetes.io/projected/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-kube-api-access-j87qr\") pod \"ceilometer-0\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " pod="openstack/ceilometer-0" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.348236 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " pod="openstack/ceilometer-0" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.348270 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-log-httpd\") pod \"ceilometer-0\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " pod="openstack/ceilometer-0" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.348767 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-config-data\") pod \"ceilometer-0\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " pod="openstack/ceilometer-0" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.348829 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-scripts\") pod \"ceilometer-0\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " pod="openstack/ceilometer-0" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.348996 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-log-httpd\") pod \"ceilometer-0\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " pod="openstack/ceilometer-0" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.352151 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-scripts\") pod \"ceilometer-0\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " pod="openstack/ceilometer-0" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.352420 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-config-data\") pod \"ceilometer-0\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " pod="openstack/ceilometer-0" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.362789 4811 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " pod="openstack/ceilometer-0" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.372984 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " pod="openstack/ceilometer-0" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.376013 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j87qr\" (UniqueName: \"kubernetes.io/projected/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-kube-api-access-j87qr\") pod \"ceilometer-0\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " pod="openstack/ceilometer-0" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.551577 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.727059 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28abda8f-8607-4e77-85d4-ab50171c709a" path="/var/lib/kubelet/pods/28abda8f-8607-4e77-85d4-ab50171c709a/volumes" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.902393 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-76ccfcd95-9jxjj" Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.985377 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-99ff95c78-p6wd9"] Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.985640 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-99ff95c78-p6wd9" podUID="0b61fef6-46f1-4197-9eef-c6fa330e5fef" containerName="neutron-api" containerID="cri-o://204a71e35621245a2a52081f4a6d75f90c3b70bce73dc68fbd4d1e29aa0b42b8" gracePeriod=30 Feb 16 21:14:52 crc kubenswrapper[4811]: I0216 21:14:52.986079 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-99ff95c78-p6wd9" podUID="0b61fef6-46f1-4197-9eef-c6fa330e5fef" containerName="neutron-httpd" containerID="cri-o://7a68d629e5d4a099bdcbf7a0ad086cc31984616c233d3c43845192be84303049" gracePeriod=30 Feb 16 21:14:53 crc kubenswrapper[4811]: I0216 21:14:53.019128 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:14:53 crc kubenswrapper[4811]: I0216 21:14:53.161913 4811 generic.go:334] "Generic (PLEG): container finished" podID="0b61fef6-46f1-4197-9eef-c6fa330e5fef" containerID="7a68d629e5d4a099bdcbf7a0ad086cc31984616c233d3c43845192be84303049" exitCode=0 Feb 16 21:14:53 crc kubenswrapper[4811]: I0216 21:14:53.161948 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-99ff95c78-p6wd9" 
event={"ID":"0b61fef6-46f1-4197-9eef-c6fa330e5fef","Type":"ContainerDied","Data":"7a68d629e5d4a099bdcbf7a0ad086cc31984616c233d3c43845192be84303049"} Feb 16 21:14:53 crc kubenswrapper[4811]: I0216 21:14:53.163711 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b","Type":"ContainerStarted","Data":"ee6b9caf2613af135dc260cda582d4f34c0ed845aa5a286a9d1e0da2c82726e2"} Feb 16 21:14:53 crc kubenswrapper[4811]: I0216 21:14:53.604205 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-l2gwq"] Feb 16 21:14:53 crc kubenswrapper[4811]: I0216 21:14:53.605532 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-l2gwq" Feb 16 21:14:53 crc kubenswrapper[4811]: I0216 21:14:53.607481 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 16 21:14:53 crc kubenswrapper[4811]: I0216 21:14:53.608003 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8g8lg" Feb 16 21:14:53 crc kubenswrapper[4811]: I0216 21:14:53.608326 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 21:14:53 crc kubenswrapper[4811]: I0216 21:14:53.612403 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-l2gwq"] Feb 16 21:14:53 crc kubenswrapper[4811]: I0216 21:14:53.669875 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn4x8\" (UniqueName: \"kubernetes.io/projected/9a532407-8a9b-4764-ac7b-d4af3c9e53e5-kube-api-access-fn4x8\") pod \"nova-cell0-conductor-db-sync-l2gwq\" (UID: \"9a532407-8a9b-4764-ac7b-d4af3c9e53e5\") " pod="openstack/nova-cell0-conductor-db-sync-l2gwq" Feb 16 21:14:53 crc kubenswrapper[4811]: I0216 21:14:53.669935 4811 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a532407-8a9b-4764-ac7b-d4af3c9e53e5-scripts\") pod \"nova-cell0-conductor-db-sync-l2gwq\" (UID: \"9a532407-8a9b-4764-ac7b-d4af3c9e53e5\") " pod="openstack/nova-cell0-conductor-db-sync-l2gwq" Feb 16 21:14:53 crc kubenswrapper[4811]: I0216 21:14:53.670087 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a532407-8a9b-4764-ac7b-d4af3c9e53e5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-l2gwq\" (UID: \"9a532407-8a9b-4764-ac7b-d4af3c9e53e5\") " pod="openstack/nova-cell0-conductor-db-sync-l2gwq" Feb 16 21:14:53 crc kubenswrapper[4811]: I0216 21:14:53.670139 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a532407-8a9b-4764-ac7b-d4af3c9e53e5-config-data\") pod \"nova-cell0-conductor-db-sync-l2gwq\" (UID: \"9a532407-8a9b-4764-ac7b-d4af3c9e53e5\") " pod="openstack/nova-cell0-conductor-db-sync-l2gwq" Feb 16 21:14:53 crc kubenswrapper[4811]: I0216 21:14:53.772336 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a532407-8a9b-4764-ac7b-d4af3c9e53e5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-l2gwq\" (UID: \"9a532407-8a9b-4764-ac7b-d4af3c9e53e5\") " pod="openstack/nova-cell0-conductor-db-sync-l2gwq" Feb 16 21:14:53 crc kubenswrapper[4811]: I0216 21:14:53.772406 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a532407-8a9b-4764-ac7b-d4af3c9e53e5-config-data\") pod \"nova-cell0-conductor-db-sync-l2gwq\" (UID: \"9a532407-8a9b-4764-ac7b-d4af3c9e53e5\") " pod="openstack/nova-cell0-conductor-db-sync-l2gwq" Feb 16 21:14:53 crc 
kubenswrapper[4811]: I0216 21:14:53.772444 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn4x8\" (UniqueName: \"kubernetes.io/projected/9a532407-8a9b-4764-ac7b-d4af3c9e53e5-kube-api-access-fn4x8\") pod \"nova-cell0-conductor-db-sync-l2gwq\" (UID: \"9a532407-8a9b-4764-ac7b-d4af3c9e53e5\") " pod="openstack/nova-cell0-conductor-db-sync-l2gwq" Feb 16 21:14:53 crc kubenswrapper[4811]: I0216 21:14:53.772484 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a532407-8a9b-4764-ac7b-d4af3c9e53e5-scripts\") pod \"nova-cell0-conductor-db-sync-l2gwq\" (UID: \"9a532407-8a9b-4764-ac7b-d4af3c9e53e5\") " pod="openstack/nova-cell0-conductor-db-sync-l2gwq" Feb 16 21:14:53 crc kubenswrapper[4811]: I0216 21:14:53.779760 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a532407-8a9b-4764-ac7b-d4af3c9e53e5-scripts\") pod \"nova-cell0-conductor-db-sync-l2gwq\" (UID: \"9a532407-8a9b-4764-ac7b-d4af3c9e53e5\") " pod="openstack/nova-cell0-conductor-db-sync-l2gwq" Feb 16 21:14:53 crc kubenswrapper[4811]: I0216 21:14:53.779921 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a532407-8a9b-4764-ac7b-d4af3c9e53e5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-l2gwq\" (UID: \"9a532407-8a9b-4764-ac7b-d4af3c9e53e5\") " pod="openstack/nova-cell0-conductor-db-sync-l2gwq" Feb 16 21:14:53 crc kubenswrapper[4811]: I0216 21:14:53.790247 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn4x8\" (UniqueName: \"kubernetes.io/projected/9a532407-8a9b-4764-ac7b-d4af3c9e53e5-kube-api-access-fn4x8\") pod \"nova-cell0-conductor-db-sync-l2gwq\" (UID: \"9a532407-8a9b-4764-ac7b-d4af3c9e53e5\") " pod="openstack/nova-cell0-conductor-db-sync-l2gwq" Feb 16 21:14:53 crc 
kubenswrapper[4811]: I0216 21:14:53.794181 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a532407-8a9b-4764-ac7b-d4af3c9e53e5-config-data\") pod \"nova-cell0-conductor-db-sync-l2gwq\" (UID: \"9a532407-8a9b-4764-ac7b-d4af3c9e53e5\") " pod="openstack/nova-cell0-conductor-db-sync-l2gwq" Feb 16 21:14:53 crc kubenswrapper[4811]: I0216 21:14:53.986285 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-l2gwq" Feb 16 21:14:54 crc kubenswrapper[4811]: I0216 21:14:54.177859 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b","Type":"ContainerStarted","Data":"d8c2ce534b13725cb805f383798d7a497c013ae8fb743df9a7b062acb873fd20"} Feb 16 21:14:54 crc kubenswrapper[4811]: I0216 21:14:54.493189 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-l2gwq"] Feb 16 21:14:55 crc kubenswrapper[4811]: I0216 21:14:55.194680 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b","Type":"ContainerStarted","Data":"c41d8b259c4537d4028d3d2da9f33741d63fd285ed64d57bfdd275cf71edd5bb"} Feb 16 21:14:55 crc kubenswrapper[4811]: I0216 21:14:55.198617 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-l2gwq" event={"ID":"9a532407-8a9b-4764-ac7b-d4af3c9e53e5","Type":"ContainerStarted","Data":"816506f10210ab4e0405316bd5192acb760a017c8c900a8adaa55f17e99db6bb"} Feb 16 21:14:56 crc kubenswrapper[4811]: I0216 21:14:56.210593 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b","Type":"ContainerStarted","Data":"ec7ab4f5b3ff134d2bfb4d44745167e7370065c017b1f79200dabee3a7bf5642"} Feb 16 21:14:56 crc kubenswrapper[4811]: E0216 
21:14:56.844438 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 21:14:56 crc kubenswrapper[4811]: E0216 21:14:56.844983 4811 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 21:14:56 crc kubenswrapper[4811]: E0216 21:14:56.845210 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s56zx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-x49kk_openstack(46d0afcb-2a14-4e67-89fc-ed848d1637ce): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:14:56 crc kubenswrapper[4811]: E0216 21:14:56.846485 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:14:57 crc kubenswrapper[4811]: I0216 21:14:57.231003 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b","Type":"ContainerStarted","Data":"1dee70dc54437dd47dc9879e268669434d465f075059bf5d5164ff704dda67b7"} Feb 16 21:14:57 crc kubenswrapper[4811]: I0216 21:14:57.231169 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 21:14:57 crc kubenswrapper[4811]: I0216 21:14:57.251674 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.912958682 podStartE2EDuration="5.251658835s" podCreationTimestamp="2026-02-16 21:14:52 +0000 UTC" firstStartedPulling="2026-02-16 21:14:53.024918095 +0000 UTC m=+1110.954214033" lastFinishedPulling="2026-02-16 21:14:56.363618248 +0000 UTC m=+1114.292914186" observedRunningTime="2026-02-16 21:14:57.24834631 +0000 UTC m=+1115.177642268" watchObservedRunningTime="2026-02-16 21:14:57.251658835 +0000 UTC m=+1115.180954773" Feb 16 21:14:57 crc kubenswrapper[4811]: I0216 21:14:57.970465 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:57 crc kubenswrapper[4811]: I0216 21:14:57.972699 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5db7fb44c6-5zcls" Feb 16 21:14:58 crc kubenswrapper[4811]: I0216 21:14:58.068280 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5f984f4f8d-xr8xc"] Feb 16 21:14:58 crc kubenswrapper[4811]: I0216 21:14:58.068521 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5f984f4f8d-xr8xc" podUID="0b20ea8e-53de-433a-8739-88f1da6a3af5" containerName="placement-log" 
containerID="cri-o://7f57acf64a906fee962c584307b5fbbabe4c24fd6f58ddcdad9851f0aad15a41" gracePeriod=30 Feb 16 21:14:58 crc kubenswrapper[4811]: I0216 21:14:58.068647 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5f984f4f8d-xr8xc" podUID="0b20ea8e-53de-433a-8739-88f1da6a3af5" containerName="placement-api" containerID="cri-o://529f36854fdaf3d5a4dc41bb0b3da42100c9da10fd73920867e2036772820eb1" gracePeriod=30 Feb 16 21:14:58 crc kubenswrapper[4811]: I0216 21:14:58.164266 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:14:58 crc kubenswrapper[4811]: I0216 21:14:58.252029 4811 generic.go:334] "Generic (PLEG): container finished" podID="0b20ea8e-53de-433a-8739-88f1da6a3af5" containerID="7f57acf64a906fee962c584307b5fbbabe4c24fd6f58ddcdad9851f0aad15a41" exitCode=143 Feb 16 21:14:58 crc kubenswrapper[4811]: I0216 21:14:58.253000 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f984f4f8d-xr8xc" event={"ID":"0b20ea8e-53de-433a-8739-88f1da6a3af5","Type":"ContainerDied","Data":"7f57acf64a906fee962c584307b5fbbabe4c24fd6f58ddcdad9851f0aad15a41"} Feb 16 21:14:59 crc kubenswrapper[4811]: I0216 21:14:59.265297 4811 generic.go:334] "Generic (PLEG): container finished" podID="0b61fef6-46f1-4197-9eef-c6fa330e5fef" containerID="204a71e35621245a2a52081f4a6d75f90c3b70bce73dc68fbd4d1e29aa0b42b8" exitCode=0 Feb 16 21:14:59 crc kubenswrapper[4811]: I0216 21:14:59.265707 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" containerName="ceilometer-central-agent" containerID="cri-o://d8c2ce534b13725cb805f383798d7a497c013ae8fb743df9a7b062acb873fd20" gracePeriod=30 Feb 16 21:14:59 crc kubenswrapper[4811]: I0216 21:14:59.265338 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-99ff95c78-p6wd9" 
event={"ID":"0b61fef6-46f1-4197-9eef-c6fa330e5fef","Type":"ContainerDied","Data":"204a71e35621245a2a52081f4a6d75f90c3b70bce73dc68fbd4d1e29aa0b42b8"} Feb 16 21:14:59 crc kubenswrapper[4811]: I0216 21:14:59.266088 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" containerName="proxy-httpd" containerID="cri-o://1dee70dc54437dd47dc9879e268669434d465f075059bf5d5164ff704dda67b7" gracePeriod=30 Feb 16 21:14:59 crc kubenswrapper[4811]: I0216 21:14:59.266129 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" containerName="sg-core" containerID="cri-o://ec7ab4f5b3ff134d2bfb4d44745167e7370065c017b1f79200dabee3a7bf5642" gracePeriod=30 Feb 16 21:14:59 crc kubenswrapper[4811]: I0216 21:14:59.266161 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" containerName="ceilometer-notification-agent" containerID="cri-o://c41d8b259c4537d4028d3d2da9f33741d63fd285ed64d57bfdd275cf71edd5bb" gracePeriod=30 Feb 16 21:14:59 crc kubenswrapper[4811]: I0216 21:14:59.618467 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:14:59 crc kubenswrapper[4811]: I0216 21:14:59.619333 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5" containerName="glance-log" containerID="cri-o://0435fc64d5d23e794f36b75641c72c486ab505052d2c63966a9f397382a079b1" gracePeriod=30 Feb 16 21:14:59 crc kubenswrapper[4811]: I0216 21:14:59.619418 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5" containerName="glance-httpd" 
containerID="cri-o://0b1b9e66c581157ec0655effc752edb74d35f7ff3f8bff1467108dfe4ef8b1e5" gracePeriod=30 Feb 16 21:15:00 crc kubenswrapper[4811]: I0216 21:15:00.162508 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521275-7p9qj"] Feb 16 21:15:00 crc kubenswrapper[4811]: I0216 21:15:00.164120 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-7p9qj" Feb 16 21:15:00 crc kubenswrapper[4811]: I0216 21:15:00.166526 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 21:15:00 crc kubenswrapper[4811]: I0216 21:15:00.167127 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 21:15:00 crc kubenswrapper[4811]: I0216 21:15:00.187187 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521275-7p9qj"] Feb 16 21:15:00 crc kubenswrapper[4811]: I0216 21:15:00.284436 4811 generic.go:334] "Generic (PLEG): container finished" podID="6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" containerID="1dee70dc54437dd47dc9879e268669434d465f075059bf5d5164ff704dda67b7" exitCode=0 Feb 16 21:15:00 crc kubenswrapper[4811]: I0216 21:15:00.284469 4811 generic.go:334] "Generic (PLEG): container finished" podID="6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" containerID="ec7ab4f5b3ff134d2bfb4d44745167e7370065c017b1f79200dabee3a7bf5642" exitCode=2 Feb 16 21:15:00 crc kubenswrapper[4811]: I0216 21:15:00.284478 4811 generic.go:334] "Generic (PLEG): container finished" podID="6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" containerID="c41d8b259c4537d4028d3d2da9f33741d63fd285ed64d57bfdd275cf71edd5bb" exitCode=0 Feb 16 21:15:00 crc kubenswrapper[4811]: I0216 21:15:00.284514 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b","Type":"ContainerDied","Data":"1dee70dc54437dd47dc9879e268669434d465f075059bf5d5164ff704dda67b7"} Feb 16 21:15:00 crc kubenswrapper[4811]: I0216 21:15:00.284544 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b","Type":"ContainerDied","Data":"ec7ab4f5b3ff134d2bfb4d44745167e7370065c017b1f79200dabee3a7bf5642"} Feb 16 21:15:00 crc kubenswrapper[4811]: I0216 21:15:00.284553 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b","Type":"ContainerDied","Data":"c41d8b259c4537d4028d3d2da9f33741d63fd285ed64d57bfdd275cf71edd5bb"} Feb 16 21:15:00 crc kubenswrapper[4811]: I0216 21:15:00.286549 4811 generic.go:334] "Generic (PLEG): container finished" podID="7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5" containerID="0435fc64d5d23e794f36b75641c72c486ab505052d2c63966a9f397382a079b1" exitCode=143 Feb 16 21:15:00 crc kubenswrapper[4811]: I0216 21:15:00.286647 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5","Type":"ContainerDied","Data":"0435fc64d5d23e794f36b75641c72c486ab505052d2c63966a9f397382a079b1"} Feb 16 21:15:00 crc kubenswrapper[4811]: I0216 21:15:00.313817 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c1f2daa-8812-4798-a597-c73a581328a6-secret-volume\") pod \"collect-profiles-29521275-7p9qj\" (UID: \"8c1f2daa-8812-4798-a597-c73a581328a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-7p9qj" Feb 16 21:15:00 crc kubenswrapper[4811]: I0216 21:15:00.313914 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk4pr\" (UniqueName: 
\"kubernetes.io/projected/8c1f2daa-8812-4798-a597-c73a581328a6-kube-api-access-wk4pr\") pod \"collect-profiles-29521275-7p9qj\" (UID: \"8c1f2daa-8812-4798-a597-c73a581328a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-7p9qj" Feb 16 21:15:00 crc kubenswrapper[4811]: I0216 21:15:00.314237 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c1f2daa-8812-4798-a597-c73a581328a6-config-volume\") pod \"collect-profiles-29521275-7p9qj\" (UID: \"8c1f2daa-8812-4798-a597-c73a581328a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-7p9qj" Feb 16 21:15:00 crc kubenswrapper[4811]: I0216 21:15:00.415395 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c1f2daa-8812-4798-a597-c73a581328a6-secret-volume\") pod \"collect-profiles-29521275-7p9qj\" (UID: \"8c1f2daa-8812-4798-a597-c73a581328a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-7p9qj" Feb 16 21:15:00 crc kubenswrapper[4811]: I0216 21:15:00.415522 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk4pr\" (UniqueName: \"kubernetes.io/projected/8c1f2daa-8812-4798-a597-c73a581328a6-kube-api-access-wk4pr\") pod \"collect-profiles-29521275-7p9qj\" (UID: \"8c1f2daa-8812-4798-a597-c73a581328a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-7p9qj" Feb 16 21:15:00 crc kubenswrapper[4811]: I0216 21:15:00.415639 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c1f2daa-8812-4798-a597-c73a581328a6-config-volume\") pod \"collect-profiles-29521275-7p9qj\" (UID: \"8c1f2daa-8812-4798-a597-c73a581328a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-7p9qj" Feb 16 21:15:00 crc 
kubenswrapper[4811]: I0216 21:15:00.416497 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c1f2daa-8812-4798-a597-c73a581328a6-config-volume\") pod \"collect-profiles-29521275-7p9qj\" (UID: \"8c1f2daa-8812-4798-a597-c73a581328a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-7p9qj" Feb 16 21:15:00 crc kubenswrapper[4811]: I0216 21:15:00.423257 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c1f2daa-8812-4798-a597-c73a581328a6-secret-volume\") pod \"collect-profiles-29521275-7p9qj\" (UID: \"8c1f2daa-8812-4798-a597-c73a581328a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-7p9qj" Feb 16 21:15:00 crc kubenswrapper[4811]: I0216 21:15:00.437931 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk4pr\" (UniqueName: \"kubernetes.io/projected/8c1f2daa-8812-4798-a597-c73a581328a6-kube-api-access-wk4pr\") pod \"collect-profiles-29521275-7p9qj\" (UID: \"8c1f2daa-8812-4798-a597-c73a581328a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-7p9qj" Feb 16 21:15:00 crc kubenswrapper[4811]: I0216 21:15:00.490855 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-7p9qj" Feb 16 21:15:00 crc kubenswrapper[4811]: I0216 21:15:00.914950 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:15:00 crc kubenswrapper[4811]: I0216 21:15:00.915259 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="13a1f6a9-4084-46c9-be98-b2a8f2a98a21" containerName="glance-log" containerID="cri-o://151f8e84f148fca50c002bb8cb351ef4dbdbecd8430f19603ed7c116c64706c2" gracePeriod=30 Feb 16 21:15:00 crc kubenswrapper[4811]: I0216 21:15:00.915389 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="13a1f6a9-4084-46c9-be98-b2a8f2a98a21" containerName="glance-httpd" containerID="cri-o://2791258945c3659b76f922d401532a57d765156bd11706358c7d4e54e3e96c97" gracePeriod=30 Feb 16 21:15:01 crc kubenswrapper[4811]: I0216 21:15:01.301434 4811 generic.go:334] "Generic (PLEG): container finished" podID="13a1f6a9-4084-46c9-be98-b2a8f2a98a21" containerID="151f8e84f148fca50c002bb8cb351ef4dbdbecd8430f19603ed7c116c64706c2" exitCode=143 Feb 16 21:15:01 crc kubenswrapper[4811]: I0216 21:15:01.301515 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"13a1f6a9-4084-46c9-be98-b2a8f2a98a21","Type":"ContainerDied","Data":"151f8e84f148fca50c002bb8cb351ef4dbdbecd8430f19603ed7c116c64706c2"} Feb 16 21:15:02 crc kubenswrapper[4811]: I0216 21:15:02.372669 4811 generic.go:334] "Generic (PLEG): container finished" podID="6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" containerID="d8c2ce534b13725cb805f383798d7a497c013ae8fb743df9a7b062acb873fd20" exitCode=0 Feb 16 21:15:02 crc kubenswrapper[4811]: I0216 21:15:02.372735 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b","Type":"ContainerDied","Data":"d8c2ce534b13725cb805f383798d7a497c013ae8fb743df9a7b062acb873fd20"} Feb 16 21:15:02 crc kubenswrapper[4811]: I0216 21:15:02.384095 4811 generic.go:334] "Generic (PLEG): container finished" podID="0b20ea8e-53de-433a-8739-88f1da6a3af5" containerID="529f36854fdaf3d5a4dc41bb0b3da42100c9da10fd73920867e2036772820eb1" exitCode=0 Feb 16 21:15:02 crc kubenswrapper[4811]: I0216 21:15:02.384139 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f984f4f8d-xr8xc" event={"ID":"0b20ea8e-53de-433a-8739-88f1da6a3af5","Type":"ContainerDied","Data":"529f36854fdaf3d5a4dc41bb0b3da42100c9da10fd73920867e2036772820eb1"} Feb 16 21:15:03 crc kubenswrapper[4811]: I0216 21:15:03.398389 4811 generic.go:334] "Generic (PLEG): container finished" podID="7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5" containerID="0b1b9e66c581157ec0655effc752edb74d35f7ff3f8bff1467108dfe4ef8b1e5" exitCode=0 Feb 16 21:15:03 crc kubenswrapper[4811]: I0216 21:15:03.398517 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5","Type":"ContainerDied","Data":"0b1b9e66c581157ec0655effc752edb74d35f7ff3f8bff1467108dfe4ef8b1e5"} Feb 16 21:15:03 crc kubenswrapper[4811]: I0216 21:15:03.786408 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-99ff95c78-p6wd9" Feb 16 21:15:03 crc kubenswrapper[4811]: I0216 21:15:03.896218 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b61fef6-46f1-4197-9eef-c6fa330e5fef-ovndb-tls-certs\") pod \"0b61fef6-46f1-4197-9eef-c6fa330e5fef\" (UID: \"0b61fef6-46f1-4197-9eef-c6fa330e5fef\") " Feb 16 21:15:03 crc kubenswrapper[4811]: I0216 21:15:03.896597 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b61fef6-46f1-4197-9eef-c6fa330e5fef-combined-ca-bundle\") pod \"0b61fef6-46f1-4197-9eef-c6fa330e5fef\" (UID: \"0b61fef6-46f1-4197-9eef-c6fa330e5fef\") " Feb 16 21:15:03 crc kubenswrapper[4811]: I0216 21:15:03.896704 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9nf7\" (UniqueName: \"kubernetes.io/projected/0b61fef6-46f1-4197-9eef-c6fa330e5fef-kube-api-access-s9nf7\") pod \"0b61fef6-46f1-4197-9eef-c6fa330e5fef\" (UID: \"0b61fef6-46f1-4197-9eef-c6fa330e5fef\") " Feb 16 21:15:03 crc kubenswrapper[4811]: I0216 21:15:03.896771 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b61fef6-46f1-4197-9eef-c6fa330e5fef-config\") pod \"0b61fef6-46f1-4197-9eef-c6fa330e5fef\" (UID: \"0b61fef6-46f1-4197-9eef-c6fa330e5fef\") " Feb 16 21:15:03 crc kubenswrapper[4811]: I0216 21:15:03.896859 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b61fef6-46f1-4197-9eef-c6fa330e5fef-httpd-config\") pod \"0b61fef6-46f1-4197-9eef-c6fa330e5fef\" (UID: \"0b61fef6-46f1-4197-9eef-c6fa330e5fef\") " Feb 16 21:15:03 crc kubenswrapper[4811]: I0216 21:15:03.904115 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/0b61fef6-46f1-4197-9eef-c6fa330e5fef-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0b61fef6-46f1-4197-9eef-c6fa330e5fef" (UID: "0b61fef6-46f1-4197-9eef-c6fa330e5fef"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:03 crc kubenswrapper[4811]: I0216 21:15:03.906069 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b61fef6-46f1-4197-9eef-c6fa330e5fef-kube-api-access-s9nf7" (OuterVolumeSpecName: "kube-api-access-s9nf7") pod "0b61fef6-46f1-4197-9eef-c6fa330e5fef" (UID: "0b61fef6-46f1-4197-9eef-c6fa330e5fef"). InnerVolumeSpecName "kube-api-access-s9nf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:03 crc kubenswrapper[4811]: I0216 21:15:03.968265 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b61fef6-46f1-4197-9eef-c6fa330e5fef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b61fef6-46f1-4197-9eef-c6fa330e5fef" (UID: "0b61fef6-46f1-4197-9eef-c6fa330e5fef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:03 crc kubenswrapper[4811]: I0216 21:15:03.970100 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b61fef6-46f1-4197-9eef-c6fa330e5fef-config" (OuterVolumeSpecName: "config") pod "0b61fef6-46f1-4197-9eef-c6fa330e5fef" (UID: "0b61fef6-46f1-4197-9eef-c6fa330e5fef"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:03 crc kubenswrapper[4811]: I0216 21:15:03.999472 4811 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b61fef6-46f1-4197-9eef-c6fa330e5fef-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:03 crc kubenswrapper[4811]: I0216 21:15:03.999500 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b61fef6-46f1-4197-9eef-c6fa330e5fef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:03 crc kubenswrapper[4811]: I0216 21:15:03.999512 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9nf7\" (UniqueName: \"kubernetes.io/projected/0b61fef6-46f1-4197-9eef-c6fa330e5fef-kube-api-access-s9nf7\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:03 crc kubenswrapper[4811]: I0216 21:15:03.999521 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b61fef6-46f1-4197-9eef-c6fa330e5fef-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.068414 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b61fef6-46f1-4197-9eef-c6fa330e5fef-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0b61fef6-46f1-4197-9eef-c6fa330e5fef" (UID: "0b61fef6-46f1-4197-9eef-c6fa330e5fef"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.096408 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.108773 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b20ea8e-53de-433a-8739-88f1da6a3af5-logs\") pod \"0b20ea8e-53de-433a-8739-88f1da6a3af5\" (UID: \"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.109485 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-public-tls-certs\") pod \"0b20ea8e-53de-433a-8739-88f1da6a3af5\" (UID: \"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.110089 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-combined-ca-bundle\") pod \"0b20ea8e-53de-433a-8739-88f1da6a3af5\" (UID: \"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.110651 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b20ea8e-53de-433a-8739-88f1da6a3af5-logs" (OuterVolumeSpecName: "logs") pod "0b20ea8e-53de-433a-8739-88f1da6a3af5" (UID: "0b20ea8e-53de-433a-8739-88f1da6a3af5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.111001 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csmhw\" (UniqueName: \"kubernetes.io/projected/0b20ea8e-53de-433a-8739-88f1da6a3af5-kube-api-access-csmhw\") pod \"0b20ea8e-53de-433a-8739-88f1da6a3af5\" (UID: \"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.111436 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-scripts\") pod \"0b20ea8e-53de-433a-8739-88f1da6a3af5\" (UID: \"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.111714 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-internal-tls-certs\") pod \"0b20ea8e-53de-433a-8739-88f1da6a3af5\" (UID: \"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.111860 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-config-data\") pod \"0b20ea8e-53de-433a-8739-88f1da6a3af5\" (UID: \"0b20ea8e-53de-433a-8739-88f1da6a3af5\") " Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.113180 4811 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b61fef6-46f1-4197-9eef-c6fa330e5fef-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.113292 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b20ea8e-53de-433a-8739-88f1da6a3af5-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:04 crc 
kubenswrapper[4811]: I0216 21:15:04.165595 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b20ea8e-53de-433a-8739-88f1da6a3af5-kube-api-access-csmhw" (OuterVolumeSpecName: "kube-api-access-csmhw") pod "0b20ea8e-53de-433a-8739-88f1da6a3af5" (UID: "0b20ea8e-53de-433a-8739-88f1da6a3af5"). InnerVolumeSpecName "kube-api-access-csmhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.192370 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-scripts" (OuterVolumeSpecName: "scripts") pod "0b20ea8e-53de-433a-8739-88f1da6a3af5" (UID: "0b20ea8e-53de-433a-8739-88f1da6a3af5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.222897 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.222930 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csmhw\" (UniqueName: \"kubernetes.io/projected/0b20ea8e-53de-433a-8739-88f1da6a3af5-kube-api-access-csmhw\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.368595 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521275-7p9qj"] Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.397148 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.407646 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b20ea8e-53de-433a-8739-88f1da6a3af5" (UID: "0b20ea8e-53de-433a-8739-88f1da6a3af5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.414447 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.417358 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b","Type":"ContainerDied","Data":"ee6b9caf2613af135dc260cda582d4f34c0ed845aa5a286a9d1e0da2c82726e2"} Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.417424 4811 scope.go:117] "RemoveContainer" containerID="1dee70dc54437dd47dc9879e268669434d465f075059bf5d5164ff704dda67b7" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.417373 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.426864 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-99ff95c78-p6wd9" event={"ID":"0b61fef6-46f1-4197-9eef-c6fa330e5fef","Type":"ContainerDied","Data":"05396d6012dd37b3b89acdc718f9e716c0c577025fc1168609f6564ecaa143d9"} Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.426948 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-99ff95c78-p6wd9" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.431080 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-scripts\") pod \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.431253 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j87qr\" (UniqueName: \"kubernetes.io/projected/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-kube-api-access-j87qr\") pod \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.431351 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-run-httpd\") pod \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.431385 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-config-data\") pod \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.431482 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-combined-ca-bundle\") pod \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.431533 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-sg-core-conf-yaml\") pod \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.431654 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-log-httpd\") pod \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\" (UID: \"6b6a15d7-5b70-425d-aa1d-51e1fc2c099b\") " Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.432246 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.432784 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" (UID: "6b6a15d7-5b70-425d-aa1d-51e1fc2c099b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.436177 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" (UID: "6b6a15d7-5b70-425d-aa1d-51e1fc2c099b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.437829 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5","Type":"ContainerDied","Data":"75f7a5c981de699fc3ff79e90f3c543272c09e24d25b079c78dc5c6984d0e922"} Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.437905 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.439463 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-kube-api-access-j87qr" (OuterVolumeSpecName: "kube-api-access-j87qr") pod "6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" (UID: "6b6a15d7-5b70-425d-aa1d-51e1fc2c099b"). InnerVolumeSpecName "kube-api-access-j87qr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.448581 4811 generic.go:334] "Generic (PLEG): container finished" podID="13a1f6a9-4084-46c9-be98-b2a8f2a98a21" containerID="2791258945c3659b76f922d401532a57d765156bd11706358c7d4e54e3e96c97" exitCode=0 Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.448767 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"13a1f6a9-4084-46c9-be98-b2a8f2a98a21","Type":"ContainerDied","Data":"2791258945c3659b76f922d401532a57d765156bd11706358c7d4e54e3e96c97"} Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.452115 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-7p9qj" event={"ID":"8c1f2daa-8812-4798-a597-c73a581328a6","Type":"ContainerStarted","Data":"294fbad66c3b0041634af277b597da65fbd2486d83c4c583a3bffec0f7edd5b6"} Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.454669 
4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f984f4f8d-xr8xc" event={"ID":"0b20ea8e-53de-433a-8739-88f1da6a3af5","Type":"ContainerDied","Data":"e919d29ade777d8d421c57a91a323e57a802794dd196a9c8fb124b354b4fa856"} Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.454819 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5f984f4f8d-xr8xc" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.465512 4811 scope.go:117] "RemoveContainer" containerID="ec7ab4f5b3ff134d2bfb4d44745167e7370065c017b1f79200dabee3a7bf5642" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.474732 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-scripts" (OuterVolumeSpecName: "scripts") pod "6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" (UID: "6b6a15d7-5b70-425d-aa1d-51e1fc2c099b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.533565 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-combined-ca-bundle\") pod \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.533698 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-public-tls-certs\") pod \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.533837 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d99tk\" (UniqueName: 
\"kubernetes.io/projected/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-kube-api-access-d99tk\") pod \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.533865 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-httpd-run\") pod \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.533896 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-scripts\") pod \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.533966 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-config-data\") pod \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.534133 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-953416f2-8442-4b16-a122-58a357229e61\") pod \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.534164 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-logs\") pod \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\" (UID: \"7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5\") " Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.534607 4811 reconciler_common.go:293] 
"Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.534626 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.534636 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j87qr\" (UniqueName: \"kubernetes.io/projected/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-kube-api-access-j87qr\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.534645 4811 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.535055 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5" (UID: "7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.535206 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-logs" (OuterVolumeSpecName: "logs") pod "7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5" (UID: "7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.535828 4811 scope.go:117] "RemoveContainer" containerID="c41d8b259c4537d4028d3d2da9f33741d63fd285ed64d57bfdd275cf71edd5bb" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.540614 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-99ff95c78-p6wd9"] Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.554671 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-99ff95c78-p6wd9"] Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.562190 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-config-data" (OuterVolumeSpecName: "config-data") pod "0b20ea8e-53de-433a-8739-88f1da6a3af5" (UID: "0b20ea8e-53de-433a-8739-88f1da6a3af5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.565382 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-scripts" (OuterVolumeSpecName: "scripts") pod "7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5" (UID: "7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.578740 4811 scope.go:117] "RemoveContainer" containerID="d8c2ce534b13725cb805f383798d7a497c013ae8fb743df9a7b062acb873fd20" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.581054 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-kube-api-access-d99tk" (OuterVolumeSpecName: "kube-api-access-d99tk") pod "7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5" (UID: "7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5"). InnerVolumeSpecName "kube-api-access-d99tk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.597704 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-953416f2-8442-4b16-a122-58a357229e61" (OuterVolumeSpecName: "glance") pod "7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5" (UID: "7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5"). InnerVolumeSpecName "pvc-953416f2-8442-4b16-a122-58a357229e61". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.608911 4811 scope.go:117] "RemoveContainer" containerID="7a68d629e5d4a099bdcbf7a0ad086cc31984616c233d3c43845192be84303049" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.615101 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" (UID: "6b6a15d7-5b70-425d-aa1d-51e1fc2c099b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.635989 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d99tk\" (UniqueName: \"kubernetes.io/projected/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-kube-api-access-d99tk\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.636009 4811 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.636018 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.636039 4811 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-953416f2-8442-4b16-a122-58a357229e61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-953416f2-8442-4b16-a122-58a357229e61\") on node \"crc\" " Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.636049 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.636059 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.636067 4811 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 
21:15:04.639073 4811 scope.go:117] "RemoveContainer" containerID="204a71e35621245a2a52081f4a6d75f90c3b70bce73dc68fbd4d1e29aa0b42b8" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.684479 4811 scope.go:117] "RemoveContainer" containerID="0b1b9e66c581157ec0655effc752edb74d35f7ff3f8bff1467108dfe4ef8b1e5" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.688395 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5" (UID: "7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.699965 4811 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.700143 4811 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-953416f2-8442-4b16-a122-58a357229e61" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-953416f2-8442-4b16-a122-58a357229e61") on node "crc" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.715809 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0b20ea8e-53de-433a-8739-88f1da6a3af5" (UID: "0b20ea8e-53de-433a-8739-88f1da6a3af5"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.721740 4811 scope.go:117] "RemoveContainer" containerID="0435fc64d5d23e794f36b75641c72c486ab505052d2c63966a9f397382a079b1" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.728818 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b61fef6-46f1-4197-9eef-c6fa330e5fef" path="/var/lib/kubelet/pods/0b61fef6-46f1-4197-9eef-c6fa330e5fef/volumes" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.736476 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-config-data" (OuterVolumeSpecName: "config-data") pod "7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5" (UID: "7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.738339 4811 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.738368 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.738378 4811 reconciler_common.go:293] "Volume detached for volume \"pvc-953416f2-8442-4b16-a122-58a357229e61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-953416f2-8442-4b16-a122-58a357229e61\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.738388 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.813129 4811 scope.go:117] "RemoveContainer" containerID="529f36854fdaf3d5a4dc41bb0b3da42100c9da10fd73920867e2036772820eb1" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.815082 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5" (UID: "7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.823470 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0b20ea8e-53de-433a-8739-88f1da6a3af5" (UID: "0b20ea8e-53de-433a-8739-88f1da6a3af5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.840845 4811 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.840880 4811 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b20ea8e-53de-433a-8739-88f1da6a3af5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.864260 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" (UID: "6b6a15d7-5b70-425d-aa1d-51e1fc2c099b"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.921465 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-config-data" (OuterVolumeSpecName: "config-data") pod "6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" (UID: "6b6a15d7-5b70-425d-aa1d-51e1fc2c099b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.942969 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:04 crc kubenswrapper[4811]: I0216 21:15:04.943005 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.074037 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.093101 4811 scope.go:117] "RemoveContainer" containerID="7f57acf64a906fee962c584307b5fbbabe4c24fd6f58ddcdad9851f0aad15a41" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.107412 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.122006 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.133095 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:15:05 crc kubenswrapper[4811]: E0216 21:15:05.133463 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" containerName="sg-core" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.133483 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" containerName="sg-core" Feb 16 21:15:05 crc kubenswrapper[4811]: E0216 21:15:05.133495 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b20ea8e-53de-433a-8739-88f1da6a3af5" containerName="placement-log" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.133502 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b20ea8e-53de-433a-8739-88f1da6a3af5" containerName="placement-log" Feb 16 21:15:05 crc kubenswrapper[4811]: E0216 21:15:05.133511 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b20ea8e-53de-433a-8739-88f1da6a3af5" containerName="placement-api" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.133517 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b20ea8e-53de-433a-8739-88f1da6a3af5" containerName="placement-api" Feb 16 21:15:05 crc kubenswrapper[4811]: E0216 21:15:05.133525 4811 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5" containerName="glance-httpd" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.133530 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5" containerName="glance-httpd" Feb 16 21:15:05 crc kubenswrapper[4811]: E0216 21:15:05.133548 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" containerName="proxy-httpd" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.133553 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" containerName="proxy-httpd" Feb 16 21:15:05 crc kubenswrapper[4811]: E0216 21:15:05.133566 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b61fef6-46f1-4197-9eef-c6fa330e5fef" containerName="neutron-api" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.133571 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b61fef6-46f1-4197-9eef-c6fa330e5fef" containerName="neutron-api" Feb 16 21:15:05 crc kubenswrapper[4811]: E0216 21:15:05.133582 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5" containerName="glance-log" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.133589 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5" containerName="glance-log" Feb 16 21:15:05 crc kubenswrapper[4811]: E0216 21:15:05.133597 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" containerName="ceilometer-central-agent" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.133603 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" containerName="ceilometer-central-agent" Feb 16 21:15:05 crc kubenswrapper[4811]: E0216 21:15:05.133617 4811 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="13a1f6a9-4084-46c9-be98-b2a8f2a98a21" containerName="glance-log" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.133623 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="13a1f6a9-4084-46c9-be98-b2a8f2a98a21" containerName="glance-log" Feb 16 21:15:05 crc kubenswrapper[4811]: E0216 21:15:05.133635 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b61fef6-46f1-4197-9eef-c6fa330e5fef" containerName="neutron-httpd" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.133640 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b61fef6-46f1-4197-9eef-c6fa330e5fef" containerName="neutron-httpd" Feb 16 21:15:05 crc kubenswrapper[4811]: E0216 21:15:05.133661 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13a1f6a9-4084-46c9-be98-b2a8f2a98a21" containerName="glance-httpd" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.133667 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="13a1f6a9-4084-46c9-be98-b2a8f2a98a21" containerName="glance-httpd" Feb 16 21:15:05 crc kubenswrapper[4811]: E0216 21:15:05.133678 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" containerName="ceilometer-notification-agent" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.133684 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" containerName="ceilometer-notification-agent" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.133848 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5" containerName="glance-httpd" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.133859 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b20ea8e-53de-433a-8739-88f1da6a3af5" containerName="placement-log" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.133866 4811 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="13a1f6a9-4084-46c9-be98-b2a8f2a98a21" containerName="glance-log" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.133877 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" containerName="ceilometer-notification-agent" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.133886 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="13a1f6a9-4084-46c9-be98-b2a8f2a98a21" containerName="glance-httpd" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.133894 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5" containerName="glance-log" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.133907 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" containerName="proxy-httpd" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.133917 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" containerName="sg-core" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.133931 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" containerName="ceilometer-central-agent" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.133942 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b61fef6-46f1-4197-9eef-c6fa330e5fef" containerName="neutron-httpd" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.133950 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b61fef6-46f1-4197-9eef-c6fa330e5fef" containerName="neutron-api" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.133958 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b20ea8e-53de-433a-8739-88f1da6a3af5" containerName="placement-api" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.135613 4811 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.140336 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.140531 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.152966 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") " pod="openstack/ceilometer-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.153265 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-scripts\") pod \"ceilometer-0\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") " pod="openstack/ceilometer-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.153314 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") " pod="openstack/ceilometer-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.153353 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-log-httpd\") pod \"ceilometer-0\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") " pod="openstack/ceilometer-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.153435 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-run-httpd\") pod \"ceilometer-0\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") " pod="openstack/ceilometer-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.153465 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn7qx\" (UniqueName: \"kubernetes.io/projected/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-kube-api-access-qn7qx\") pod \"ceilometer-0\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") " pod="openstack/ceilometer-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.156364 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-config-data\") pod \"ceilometer-0\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") " pod="openstack/ceilometer-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.185950 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.221248 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5f984f4f8d-xr8xc"] Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.231974 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5f984f4f8d-xr8xc"] Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.257600 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-config-data\") pod \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.257656 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-scripts\") pod \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.257788 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\") pod \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.257824 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-combined-ca-bundle\") pod \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.257876 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-internal-tls-certs\") pod \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.257953 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvljs\" (UniqueName: \"kubernetes.io/projected/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-kube-api-access-zvljs\") pod \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.257974 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-logs\") pod \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " Feb 16 21:15:05 crc 
kubenswrapper[4811]: I0216 21:15:05.258018 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-httpd-run\") pod \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\" (UID: \"13a1f6a9-4084-46c9-be98-b2a8f2a98a21\") " Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.258151 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") " pod="openstack/ceilometer-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.258205 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-log-httpd\") pod \"ceilometer-0\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") " pod="openstack/ceilometer-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.258265 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-run-httpd\") pod \"ceilometer-0\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") " pod="openstack/ceilometer-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.258290 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn7qx\" (UniqueName: \"kubernetes.io/projected/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-kube-api-access-qn7qx\") pod \"ceilometer-0\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") " pod="openstack/ceilometer-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.258321 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-config-data\") 
pod \"ceilometer-0\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") " pod="openstack/ceilometer-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.258360 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") " pod="openstack/ceilometer-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.258388 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-scripts\") pod \"ceilometer-0\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") " pod="openstack/ceilometer-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.260552 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "13a1f6a9-4084-46c9-be98-b2a8f2a98a21" (UID: "13a1f6a9-4084-46c9-be98-b2a8f2a98a21"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.262642 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-logs" (OuterVolumeSpecName: "logs") pod "13a1f6a9-4084-46c9-be98-b2a8f2a98a21" (UID: "13a1f6a9-4084-46c9-be98-b2a8f2a98a21"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.266590 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-scripts\") pod \"ceilometer-0\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") " pod="openstack/ceilometer-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.266670 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.267652 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-log-httpd\") pod \"ceilometer-0\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") " pod="openstack/ceilometer-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.269842 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-run-httpd\") pod \"ceilometer-0\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") " pod="openstack/ceilometer-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.271428 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") " pod="openstack/ceilometer-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.273963 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-kube-api-access-zvljs" (OuterVolumeSpecName: "kube-api-access-zvljs") pod "13a1f6a9-4084-46c9-be98-b2a8f2a98a21" (UID: "13a1f6a9-4084-46c9-be98-b2a8f2a98a21"). InnerVolumeSpecName "kube-api-access-zvljs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.283112 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-config-data\") pod \"ceilometer-0\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") " pod="openstack/ceilometer-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.283802 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-scripts" (OuterVolumeSpecName: "scripts") pod "13a1f6a9-4084-46c9-be98-b2a8f2a98a21" (UID: "13a1f6a9-4084-46c9-be98-b2a8f2a98a21"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.286960 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") " pod="openstack/ceilometer-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.295292 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn7qx\" (UniqueName: \"kubernetes.io/projected/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-kube-api-access-qn7qx\") pod \"ceilometer-0\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") " pod="openstack/ceilometer-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.300655 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e" (OuterVolumeSpecName: "glance") pod "13a1f6a9-4084-46c9-be98-b2a8f2a98a21" (UID: "13a1f6a9-4084-46c9-be98-b2a8f2a98a21"). InnerVolumeSpecName "pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.303978 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.332170 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.334327 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.336582 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-config-data" (OuterVolumeSpecName: "config-data") pod "13a1f6a9-4084-46c9-be98-b2a8f2a98a21" (UID: "13a1f6a9-4084-46c9-be98-b2a8f2a98a21"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.336775 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.339112 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.345856 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.352792 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13a1f6a9-4084-46c9-be98-b2a8f2a98a21" (UID: "13a1f6a9-4084-46c9-be98-b2a8f2a98a21"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.352826 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "13a1f6a9-4084-46c9-be98-b2a8f2a98a21" (UID: "13a1f6a9-4084-46c9-be98-b2a8f2a98a21"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.361478 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.361514 4811 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.361524 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvljs\" (UniqueName: \"kubernetes.io/projected/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-kube-api-access-zvljs\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.361534 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.361543 4811 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.361552 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.361560 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13a1f6a9-4084-46c9-be98-b2a8f2a98a21-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.361586 4811 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\") on node \"crc\" " Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.386238 4811 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.386400 4811 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e") on node "crc" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.464081 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24e0a62a-333f-499c-b046-62e94e2ff0be-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"24e0a62a-333f-499c-b046-62e94e2ff0be\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.464119 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24e0a62a-333f-499c-b046-62e94e2ff0be-logs\") pod \"glance-default-external-api-0\" (UID: \"24e0a62a-333f-499c-b046-62e94e2ff0be\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc 
kubenswrapper[4811]: I0216 21:15:05.464165 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-953416f2-8442-4b16-a122-58a357229e61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-953416f2-8442-4b16-a122-58a357229e61\") pod \"glance-default-external-api-0\" (UID: \"24e0a62a-333f-499c-b046-62e94e2ff0be\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.464254 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24e0a62a-333f-499c-b046-62e94e2ff0be-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"24e0a62a-333f-499c-b046-62e94e2ff0be\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.464302 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tll2w\" (UniqueName: \"kubernetes.io/projected/24e0a62a-333f-499c-b046-62e94e2ff0be-kube-api-access-tll2w\") pod \"glance-default-external-api-0\" (UID: \"24e0a62a-333f-499c-b046-62e94e2ff0be\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.464353 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24e0a62a-333f-499c-b046-62e94e2ff0be-config-data\") pod \"glance-default-external-api-0\" (UID: \"24e0a62a-333f-499c-b046-62e94e2ff0be\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.464374 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24e0a62a-333f-499c-b046-62e94e2ff0be-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"24e0a62a-333f-499c-b046-62e94e2ff0be\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.464390 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24e0a62a-333f-499c-b046-62e94e2ff0be-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"24e0a62a-333f-499c-b046-62e94e2ff0be\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.464437 4811 reconciler_common.go:293] "Volume detached for volume \"pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.465844 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.471794 4811 generic.go:334] "Generic (PLEG): container finished" podID="8c1f2daa-8812-4798-a597-c73a581328a6" containerID="935fd0ef3ab605fea3e3efcd278ebd821380752e1f61dddef1910d91bd75d17a" exitCode=0 Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.471862 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-7p9qj" event={"ID":"8c1f2daa-8812-4798-a597-c73a581328a6","Type":"ContainerDied","Data":"935fd0ef3ab605fea3e3efcd278ebd821380752e1f61dddef1910d91bd75d17a"} Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.474143 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-l2gwq" event={"ID":"9a532407-8a9b-4764-ac7b-d4af3c9e53e5","Type":"ContainerStarted","Data":"9b0c748e1acb21938335555bd06b6e93d705d12d26601030596ef94865519b33"} Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.480934 4811 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/glance-default-internal-api-0" event={"ID":"13a1f6a9-4084-46c9-be98-b2a8f2a98a21","Type":"ContainerDied","Data":"1071ab1cd1bb6da7f3bc4f37ff9f40218474a25bdbbe07bb6bdc45724a83ab24"} Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.480975 4811 scope.go:117] "RemoveContainer" containerID="2791258945c3659b76f922d401532a57d765156bd11706358c7d4e54e3e96c97" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.481067 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.525376 4811 scope.go:117] "RemoveContainer" containerID="151f8e84f148fca50c002bb8cb351ef4dbdbecd8430f19603ed7c116c64706c2" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.555665 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-l2gwq" podStartSLOduration=3.349039295 podStartE2EDuration="12.555644224s" podCreationTimestamp="2026-02-16 21:14:53 +0000 UTC" firstStartedPulling="2026-02-16 21:14:54.502762729 +0000 UTC m=+1112.432058667" lastFinishedPulling="2026-02-16 21:15:03.709367658 +0000 UTC m=+1121.638663596" observedRunningTime="2026-02-16 21:15:05.519686095 +0000 UTC m=+1123.448982033" watchObservedRunningTime="2026-02-16 21:15:05.555644224 +0000 UTC m=+1123.484940162" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.557670 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.570871 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24e0a62a-333f-499c-b046-62e94e2ff0be-config-data\") pod \"glance-default-external-api-0\" (UID: \"24e0a62a-333f-499c-b046-62e94e2ff0be\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.570943 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24e0a62a-333f-499c-b046-62e94e2ff0be-scripts\") pod \"glance-default-external-api-0\" (UID: \"24e0a62a-333f-499c-b046-62e94e2ff0be\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.570977 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24e0a62a-333f-499c-b046-62e94e2ff0be-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"24e0a62a-333f-499c-b046-62e94e2ff0be\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.571059 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24e0a62a-333f-499c-b046-62e94e2ff0be-logs\") pod \"glance-default-external-api-0\" (UID: \"24e0a62a-333f-499c-b046-62e94e2ff0be\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.571082 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24e0a62a-333f-499c-b046-62e94e2ff0be-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"24e0a62a-333f-499c-b046-62e94e2ff0be\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.571219 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-953416f2-8442-4b16-a122-58a357229e61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-953416f2-8442-4b16-a122-58a357229e61\") pod \"glance-default-external-api-0\" (UID: \"24e0a62a-333f-499c-b046-62e94e2ff0be\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.571441 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24e0a62a-333f-499c-b046-62e94e2ff0be-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"24e0a62a-333f-499c-b046-62e94e2ff0be\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.571576 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tll2w\" (UniqueName: \"kubernetes.io/projected/24e0a62a-333f-499c-b046-62e94e2ff0be-kube-api-access-tll2w\") pod \"glance-default-external-api-0\" (UID: \"24e0a62a-333f-499c-b046-62e94e2ff0be\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.573109 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24e0a62a-333f-499c-b046-62e94e2ff0be-logs\") pod \"glance-default-external-api-0\" (UID: \"24e0a62a-333f-499c-b046-62e94e2ff0be\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.573649 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24e0a62a-333f-499c-b046-62e94e2ff0be-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"24e0a62a-333f-499c-b046-62e94e2ff0be\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.580331 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.584407 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24e0a62a-333f-499c-b046-62e94e2ff0be-scripts\") pod \"glance-default-external-api-0\" (UID: \"24e0a62a-333f-499c-b046-62e94e2ff0be\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc 
kubenswrapper[4811]: I0216 21:15:05.584496 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24e0a62a-333f-499c-b046-62e94e2ff0be-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"24e0a62a-333f-499c-b046-62e94e2ff0be\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.584870 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24e0a62a-333f-499c-b046-62e94e2ff0be-config-data\") pod \"glance-default-external-api-0\" (UID: \"24e0a62a-333f-499c-b046-62e94e2ff0be\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.586562 4811 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.586592 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-953416f2-8442-4b16-a122-58a357229e61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-953416f2-8442-4b16-a122-58a357229e61\") pod \"glance-default-external-api-0\" (UID: \"24e0a62a-333f-499c-b046-62e94e2ff0be\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4be36762ec81406c6a6e28b128f06340bf885474247138da4e2187429bf9f1df/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.587702 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24e0a62a-333f-499c-b046-62e94e2ff0be-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"24e0a62a-333f-499c-b046-62e94e2ff0be\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.598228 4811 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.599061 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tll2w\" (UniqueName: \"kubernetes.io/projected/24e0a62a-333f-499c-b046-62e94e2ff0be-kube-api-access-tll2w\") pod \"glance-default-external-api-0\" (UID: \"24e0a62a-333f-499c-b046-62e94e2ff0be\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.601270 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.604677 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.611306 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.613442 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.649342 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-953416f2-8442-4b16-a122-58a357229e61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-953416f2-8442-4b16-a122-58a357229e61\") pod \"glance-default-external-api-0\" (UID: \"24e0a62a-333f-499c-b046-62e94e2ff0be\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.737971 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.775712 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\") pod \"glance-default-internal-api-0\" (UID: \"19e08db9-4ed5-42e9-bf1e-dec8a1906116\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.775747 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/19e08db9-4ed5-42e9-bf1e-dec8a1906116-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"19e08db9-4ed5-42e9-bf1e-dec8a1906116\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.775785 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19e08db9-4ed5-42e9-bf1e-dec8a1906116-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"19e08db9-4ed5-42e9-bf1e-dec8a1906116\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.775808 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19e08db9-4ed5-42e9-bf1e-dec8a1906116-scripts\") pod \"glance-default-internal-api-0\" (UID: \"19e08db9-4ed5-42e9-bf1e-dec8a1906116\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.775835 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19e08db9-4ed5-42e9-bf1e-dec8a1906116-config-data\") 
pod \"glance-default-internal-api-0\" (UID: \"19e08db9-4ed5-42e9-bf1e-dec8a1906116\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.775925 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4txm2\" (UniqueName: \"kubernetes.io/projected/19e08db9-4ed5-42e9-bf1e-dec8a1906116-kube-api-access-4txm2\") pod \"glance-default-internal-api-0\" (UID: \"19e08db9-4ed5-42e9-bf1e-dec8a1906116\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.775949 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19e08db9-4ed5-42e9-bf1e-dec8a1906116-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"19e08db9-4ed5-42e9-bf1e-dec8a1906116\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.776036 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19e08db9-4ed5-42e9-bf1e-dec8a1906116-logs\") pod \"glance-default-internal-api-0\" (UID: \"19e08db9-4ed5-42e9-bf1e-dec8a1906116\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.877303 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19e08db9-4ed5-42e9-bf1e-dec8a1906116-logs\") pod \"glance-default-internal-api-0\" (UID: \"19e08db9-4ed5-42e9-bf1e-dec8a1906116\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.877399 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\") pod \"glance-default-internal-api-0\" (UID: \"19e08db9-4ed5-42e9-bf1e-dec8a1906116\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.877418 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/19e08db9-4ed5-42e9-bf1e-dec8a1906116-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"19e08db9-4ed5-42e9-bf1e-dec8a1906116\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.877446 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19e08db9-4ed5-42e9-bf1e-dec8a1906116-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"19e08db9-4ed5-42e9-bf1e-dec8a1906116\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.877466 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19e08db9-4ed5-42e9-bf1e-dec8a1906116-scripts\") pod \"glance-default-internal-api-0\" (UID: \"19e08db9-4ed5-42e9-bf1e-dec8a1906116\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.877494 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19e08db9-4ed5-42e9-bf1e-dec8a1906116-config-data\") pod \"glance-default-internal-api-0\" (UID: \"19e08db9-4ed5-42e9-bf1e-dec8a1906116\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.877551 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4txm2\" (UniqueName: 
\"kubernetes.io/projected/19e08db9-4ed5-42e9-bf1e-dec8a1906116-kube-api-access-4txm2\") pod \"glance-default-internal-api-0\" (UID: \"19e08db9-4ed5-42e9-bf1e-dec8a1906116\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.877575 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19e08db9-4ed5-42e9-bf1e-dec8a1906116-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"19e08db9-4ed5-42e9-bf1e-dec8a1906116\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.879740 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19e08db9-4ed5-42e9-bf1e-dec8a1906116-logs\") pod \"glance-default-internal-api-0\" (UID: \"19e08db9-4ed5-42e9-bf1e-dec8a1906116\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.880308 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/19e08db9-4ed5-42e9-bf1e-dec8a1906116-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"19e08db9-4ed5-42e9-bf1e-dec8a1906116\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.882332 4811 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.882456 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\") pod \"glance-default-internal-api-0\" (UID: \"19e08db9-4ed5-42e9-bf1e-dec8a1906116\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/869535b9c27ca8a569925eb99ba7bc75347069a54c745c93c6e314aa9f1a2c6c/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.883750 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19e08db9-4ed5-42e9-bf1e-dec8a1906116-scripts\") pod \"glance-default-internal-api-0\" (UID: \"19e08db9-4ed5-42e9-bf1e-dec8a1906116\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.883793 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19e08db9-4ed5-42e9-bf1e-dec8a1906116-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"19e08db9-4ed5-42e9-bf1e-dec8a1906116\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.886208 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19e08db9-4ed5-42e9-bf1e-dec8a1906116-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"19e08db9-4ed5-42e9-bf1e-dec8a1906116\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.887132 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19e08db9-4ed5-42e9-bf1e-dec8a1906116-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"19e08db9-4ed5-42e9-bf1e-dec8a1906116\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.901286 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4txm2\" (UniqueName: \"kubernetes.io/projected/19e08db9-4ed5-42e9-bf1e-dec8a1906116-kube-api-access-4txm2\") pod \"glance-default-internal-api-0\" (UID: \"19e08db9-4ed5-42e9-bf1e-dec8a1906116\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:05 crc kubenswrapper[4811]: I0216 21:15:05.931562 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f43c3b2-5d15-4441-930e-a0883c478b0e\") pod \"glance-default-internal-api-0\" (UID: \"19e08db9-4ed5-42e9-bf1e-dec8a1906116\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:06 crc kubenswrapper[4811]: I0216 21:15:06.003459 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:15:06 crc kubenswrapper[4811]: W0216 21:15:06.008626 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ef4e03d_93b7_4d5c_a4d0_9341a950f03d.slice/crio-c55743a69902c493d935672ae6bc402824283f67620531b7cb632628a5a3285f WatchSource:0}: Error finding container c55743a69902c493d935672ae6bc402824283f67620531b7cb632628a5a3285f: Status 404 returned error can't find the container with id c55743a69902c493d935672ae6bc402824283f67620531b7cb632628a5a3285f Feb 16 21:15:06 crc kubenswrapper[4811]: I0216 21:15:06.232146 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:15:06 crc kubenswrapper[4811]: W0216 21:15:06.332580 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24e0a62a_333f_499c_b046_62e94e2ff0be.slice/crio-49a5bfb2a6567dad36816fb2e8b3b67512978aa6d000163be0c416f7cdb462aa WatchSource:0}: Error finding container 49a5bfb2a6567dad36816fb2e8b3b67512978aa6d000163be0c416f7cdb462aa: Status 404 returned error can't find the container with id 49a5bfb2a6567dad36816fb2e8b3b67512978aa6d000163be0c416f7cdb462aa Feb 16 21:15:06 crc kubenswrapper[4811]: I0216 21:15:06.335497 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:15:06 crc kubenswrapper[4811]: I0216 21:15:06.497808 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d","Type":"ContainerStarted","Data":"c55743a69902c493d935672ae6bc402824283f67620531b7cb632628a5a3285f"} Feb 16 21:15:06 crc kubenswrapper[4811]: I0216 21:15:06.499010 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"24e0a62a-333f-499c-b046-62e94e2ff0be","Type":"ContainerStarted","Data":"49a5bfb2a6567dad36816fb2e8b3b67512978aa6d000163be0c416f7cdb462aa"} Feb 16 21:15:06 crc kubenswrapper[4811]: I0216 21:15:06.734071 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b20ea8e-53de-433a-8739-88f1da6a3af5" path="/var/lib/kubelet/pods/0b20ea8e-53de-433a-8739-88f1da6a3af5/volumes" Feb 16 21:15:06 crc kubenswrapper[4811]: I0216 21:15:06.734761 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13a1f6a9-4084-46c9-be98-b2a8f2a98a21" path="/var/lib/kubelet/pods/13a1f6a9-4084-46c9-be98-b2a8f2a98a21/volumes" Feb 16 21:15:06 crc kubenswrapper[4811]: I0216 21:15:06.735586 4811 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="6b6a15d7-5b70-425d-aa1d-51e1fc2c099b" path="/var/lib/kubelet/pods/6b6a15d7-5b70-425d-aa1d-51e1fc2c099b/volumes" Feb 16 21:15:06 crc kubenswrapper[4811]: I0216 21:15:06.737327 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5" path="/var/lib/kubelet/pods/7b0ca6b2-78c0-47ae-95f7-5ebdb58cb3e5/volumes" Feb 16 21:15:06 crc kubenswrapper[4811]: I0216 21:15:06.825032 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:15:06 crc kubenswrapper[4811]: I0216 21:15:06.983458 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-7p9qj" Feb 16 21:15:07 crc kubenswrapper[4811]: I0216 21:15:07.150837 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c1f2daa-8812-4798-a597-c73a581328a6-secret-volume\") pod \"8c1f2daa-8812-4798-a597-c73a581328a6\" (UID: \"8c1f2daa-8812-4798-a597-c73a581328a6\") " Feb 16 21:15:07 crc kubenswrapper[4811]: I0216 21:15:07.150931 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c1f2daa-8812-4798-a597-c73a581328a6-config-volume\") pod \"8c1f2daa-8812-4798-a597-c73a581328a6\" (UID: \"8c1f2daa-8812-4798-a597-c73a581328a6\") " Feb 16 21:15:07 crc kubenswrapper[4811]: I0216 21:15:07.151897 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c1f2daa-8812-4798-a597-c73a581328a6-config-volume" (OuterVolumeSpecName: "config-volume") pod "8c1f2daa-8812-4798-a597-c73a581328a6" (UID: "8c1f2daa-8812-4798-a597-c73a581328a6"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:07 crc kubenswrapper[4811]: I0216 21:15:07.150964 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk4pr\" (UniqueName: \"kubernetes.io/projected/8c1f2daa-8812-4798-a597-c73a581328a6-kube-api-access-wk4pr\") pod \"8c1f2daa-8812-4798-a597-c73a581328a6\" (UID: \"8c1f2daa-8812-4798-a597-c73a581328a6\") " Feb 16 21:15:07 crc kubenswrapper[4811]: I0216 21:15:07.152864 4811 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c1f2daa-8812-4798-a597-c73a581328a6-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:07 crc kubenswrapper[4811]: I0216 21:15:07.156637 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c1f2daa-8812-4798-a597-c73a581328a6-kube-api-access-wk4pr" (OuterVolumeSpecName: "kube-api-access-wk4pr") pod "8c1f2daa-8812-4798-a597-c73a581328a6" (UID: "8c1f2daa-8812-4798-a597-c73a581328a6"). InnerVolumeSpecName "kube-api-access-wk4pr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:07 crc kubenswrapper[4811]: I0216 21:15:07.157085 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c1f2daa-8812-4798-a597-c73a581328a6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8c1f2daa-8812-4798-a597-c73a581328a6" (UID: "8c1f2daa-8812-4798-a597-c73a581328a6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:07 crc kubenswrapper[4811]: I0216 21:15:07.254678 4811 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c1f2daa-8812-4798-a597-c73a581328a6-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:07 crc kubenswrapper[4811]: I0216 21:15:07.254716 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk4pr\" (UniqueName: \"kubernetes.io/projected/8c1f2daa-8812-4798-a597-c73a581328a6-kube-api-access-wk4pr\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:07 crc kubenswrapper[4811]: I0216 21:15:07.526463 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d","Type":"ContainerStarted","Data":"6236fc4cf7961be741fa4430cbff75473af1b51a609f3b70f2affeff786489ef"} Feb 16 21:15:07 crc kubenswrapper[4811]: I0216 21:15:07.530032 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"19e08db9-4ed5-42e9-bf1e-dec8a1906116","Type":"ContainerStarted","Data":"cec169702135979c9095eb9cce07d6f497afb7af0ece70e37a40edfcf4e0cb76"} Feb 16 21:15:07 crc kubenswrapper[4811]: I0216 21:15:07.530089 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"19e08db9-4ed5-42e9-bf1e-dec8a1906116","Type":"ContainerStarted","Data":"1da251e5ae48772446f887f809edcf4faf0daa5d39b9371171bad0e7ce7edb60"} Feb 16 21:15:07 crc kubenswrapper[4811]: I0216 21:15:07.564380 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"24e0a62a-333f-499c-b046-62e94e2ff0be","Type":"ContainerStarted","Data":"cdf5ba39f43fe7b8c43467fc817d54a0494c6f9aca3c44905b41212df980b3fc"} Feb 16 21:15:07 crc kubenswrapper[4811]: I0216 21:15:07.572617 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-7p9qj" event={"ID":"8c1f2daa-8812-4798-a597-c73a581328a6","Type":"ContainerDied","Data":"294fbad66c3b0041634af277b597da65fbd2486d83c4c583a3bffec0f7edd5b6"} Feb 16 21:15:07 crc kubenswrapper[4811]: I0216 21:15:07.572662 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="294fbad66c3b0041634af277b597da65fbd2486d83c4c583a3bffec0f7edd5b6" Feb 16 21:15:07 crc kubenswrapper[4811]: I0216 21:15:07.572726 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-7p9qj" Feb 16 21:15:08 crc kubenswrapper[4811]: I0216 21:15:08.414238 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:15:08 crc kubenswrapper[4811]: I0216 21:15:08.584955 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d","Type":"ContainerStarted","Data":"7d9dff2a0f983af095cd0917ff1eba41e3cced8a890ab1139bb71c470a230960"} Feb 16 21:15:08 crc kubenswrapper[4811]: I0216 21:15:08.584995 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d","Type":"ContainerStarted","Data":"de73a119376c0a3b4b072a32d8f7779c6a96c94024559e0d040f3ebbde73e544"} Feb 16 21:15:08 crc kubenswrapper[4811]: I0216 21:15:08.587652 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"19e08db9-4ed5-42e9-bf1e-dec8a1906116","Type":"ContainerStarted","Data":"371a4322eb477fd2974524e3d7117052aa28c4be9cba83d57d7bf4924f9bc59d"} Feb 16 21:15:08 crc kubenswrapper[4811]: I0216 21:15:08.589985 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"24e0a62a-333f-499c-b046-62e94e2ff0be","Type":"ContainerStarted","Data":"ea966cde44e52d6aa1fb5d77efa34ad8f6b51e0d4409e48c1608141c666be17a"} Feb 16 21:15:08 crc kubenswrapper[4811]: I0216 21:15:08.615396 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.61537985 podStartE2EDuration="3.61537985s" podCreationTimestamp="2026-02-16 21:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:15:08.611301586 +0000 UTC m=+1126.540597544" watchObservedRunningTime="2026-02-16 21:15:08.61537985 +0000 UTC m=+1126.544675788" Feb 16 21:15:10 crc kubenswrapper[4811]: I0216 21:15:10.616374 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d","Type":"ContainerStarted","Data":"4e0b6b773b65a55c3d79878fb3e734f2310fefdf5015f609d2da7c767492508f"} Feb 16 21:15:10 crc kubenswrapper[4811]: I0216 21:15:10.616861 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 21:15:10 crc kubenswrapper[4811]: I0216 21:15:10.650550 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.8426242529999999 podStartE2EDuration="5.650524492s" podCreationTimestamp="2026-02-16 21:15:05 +0000 UTC" firstStartedPulling="2026-02-16 21:15:06.011144061 +0000 UTC m=+1123.940439999" lastFinishedPulling="2026-02-16 21:15:09.8190443 +0000 UTC m=+1127.748340238" observedRunningTime="2026-02-16 21:15:10.642070716 +0000 UTC m=+1128.571366684" watchObservedRunningTime="2026-02-16 21:15:10.650524492 +0000 UTC m=+1128.579820440" Feb 16 21:15:10 crc kubenswrapper[4811]: I0216 21:15:10.650719 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" 
podStartSLOduration=5.650710937 podStartE2EDuration="5.650710937s" podCreationTimestamp="2026-02-16 21:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:15:08.647686385 +0000 UTC m=+1126.576982323" watchObservedRunningTime="2026-02-16 21:15:10.650710937 +0000 UTC m=+1128.580006905" Feb 16 21:15:10 crc kubenswrapper[4811]: E0216 21:15:10.705350 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:15:11 crc kubenswrapper[4811]: I0216 21:15:11.380406 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:15:12 crc kubenswrapper[4811]: I0216 21:15:12.647832 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" containerName="ceilometer-central-agent" containerID="cri-o://6236fc4cf7961be741fa4430cbff75473af1b51a609f3b70f2affeff786489ef" gracePeriod=30 Feb 16 21:15:12 crc kubenswrapper[4811]: I0216 21:15:12.647914 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" containerName="sg-core" containerID="cri-o://7d9dff2a0f983af095cd0917ff1eba41e3cced8a890ab1139bb71c470a230960" gracePeriod=30 Feb 16 21:15:12 crc kubenswrapper[4811]: I0216 21:15:12.647954 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" containerName="ceilometer-notification-agent" containerID="cri-o://de73a119376c0a3b4b072a32d8f7779c6a96c94024559e0d040f3ebbde73e544" gracePeriod=30 
Feb 16 21:15:12 crc kubenswrapper[4811]: I0216 21:15:12.647962 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" containerName="proxy-httpd" containerID="cri-o://4e0b6b773b65a55c3d79878fb3e734f2310fefdf5015f609d2da7c767492508f" gracePeriod=30 Feb 16 21:15:13 crc kubenswrapper[4811]: I0216 21:15:13.662687 4811 generic.go:334] "Generic (PLEG): container finished" podID="0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" containerID="4e0b6b773b65a55c3d79878fb3e734f2310fefdf5015f609d2da7c767492508f" exitCode=0 Feb 16 21:15:13 crc kubenswrapper[4811]: I0216 21:15:13.663125 4811 generic.go:334] "Generic (PLEG): container finished" podID="0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" containerID="7d9dff2a0f983af095cd0917ff1eba41e3cced8a890ab1139bb71c470a230960" exitCode=2 Feb 16 21:15:13 crc kubenswrapper[4811]: I0216 21:15:13.663144 4811 generic.go:334] "Generic (PLEG): container finished" podID="0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" containerID="de73a119376c0a3b4b072a32d8f7779c6a96c94024559e0d040f3ebbde73e544" exitCode=0 Feb 16 21:15:13 crc kubenswrapper[4811]: I0216 21:15:13.662847 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d","Type":"ContainerDied","Data":"4e0b6b773b65a55c3d79878fb3e734f2310fefdf5015f609d2da7c767492508f"} Feb 16 21:15:13 crc kubenswrapper[4811]: I0216 21:15:13.663235 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d","Type":"ContainerDied","Data":"7d9dff2a0f983af095cd0917ff1eba41e3cced8a890ab1139bb71c470a230960"} Feb 16 21:15:13 crc kubenswrapper[4811]: I0216 21:15:13.663264 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d","Type":"ContainerDied","Data":"de73a119376c0a3b4b072a32d8f7779c6a96c94024559e0d040f3ebbde73e544"} 
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.551872 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.612489 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-config-data\") pod \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") "
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.612585 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-combined-ca-bundle\") pod \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") "
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.612607 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-scripts\") pod \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") "
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.612646 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-log-httpd\") pod \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") "
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.612749 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-run-httpd\") pod \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") "
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.612771 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-sg-core-conf-yaml\") pod \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") "
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.612801 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn7qx\" (UniqueName: \"kubernetes.io/projected/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-kube-api-access-qn7qx\") pod \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\" (UID: \"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d\") "
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.613727 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" (UID: "0ef4e03d-93b7-4d5c-a4d0-9341a950f03d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.613893 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" (UID: "0ef4e03d-93b7-4d5c-a4d0-9341a950f03d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.618733 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-kube-api-access-qn7qx" (OuterVolumeSpecName: "kube-api-access-qn7qx") pod "0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" (UID: "0ef4e03d-93b7-4d5c-a4d0-9341a950f03d"). InnerVolumeSpecName "kube-api-access-qn7qx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.619893 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-scripts" (OuterVolumeSpecName: "scripts") pod "0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" (UID: "0ef4e03d-93b7-4d5c-a4d0-9341a950f03d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.650626 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" (UID: "0ef4e03d-93b7-4d5c-a4d0-9341a950f03d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.677525 4811 generic.go:334] "Generic (PLEG): container finished" podID="9a532407-8a9b-4764-ac7b-d4af3c9e53e5" containerID="9b0c748e1acb21938335555bd06b6e93d705d12d26601030596ef94865519b33" exitCode=0
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.677605 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-l2gwq" event={"ID":"9a532407-8a9b-4764-ac7b-d4af3c9e53e5","Type":"ContainerDied","Data":"9b0c748e1acb21938335555bd06b6e93d705d12d26601030596ef94865519b33"}
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.685485 4811 generic.go:334] "Generic (PLEG): container finished" podID="0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" containerID="6236fc4cf7961be741fa4430cbff75473af1b51a609f3b70f2affeff786489ef" exitCode=0
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.685535 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d","Type":"ContainerDied","Data":"6236fc4cf7961be741fa4430cbff75473af1b51a609f3b70f2affeff786489ef"}
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.685573 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ef4e03d-93b7-4d5c-a4d0-9341a950f03d","Type":"ContainerDied","Data":"c55743a69902c493d935672ae6bc402824283f67620531b7cb632628a5a3285f"}
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.685600 4811 scope.go:117] "RemoveContainer" containerID="4e0b6b773b65a55c3d79878fb3e734f2310fefdf5015f609d2da7c767492508f"
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.685880 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.715025 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.715237 4811 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.715348 4811 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.715425 4811 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.715522 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn7qx\" (UniqueName: \"kubernetes.io/projected/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-kube-api-access-qn7qx\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.726703 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" (UID: "0ef4e03d-93b7-4d5c-a4d0-9341a950f03d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.749226 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-config-data" (OuterVolumeSpecName: "config-data") pod "0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" (UID: "0ef4e03d-93b7-4d5c-a4d0-9341a950f03d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.780430 4811 scope.go:117] "RemoveContainer" containerID="7d9dff2a0f983af095cd0917ff1eba41e3cced8a890ab1139bb71c470a230960"
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.809441 4811 scope.go:117] "RemoveContainer" containerID="de73a119376c0a3b4b072a32d8f7779c6a96c94024559e0d040f3ebbde73e544"
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.817689 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.817724 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.832256 4811 scope.go:117] "RemoveContainer" containerID="6236fc4cf7961be741fa4430cbff75473af1b51a609f3b70f2affeff786489ef"
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.860732 4811 scope.go:117] "RemoveContainer" containerID="4e0b6b773b65a55c3d79878fb3e734f2310fefdf5015f609d2da7c767492508f"
Feb 16 21:15:14 crc kubenswrapper[4811]: E0216 21:15:14.861151 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e0b6b773b65a55c3d79878fb3e734f2310fefdf5015f609d2da7c767492508f\": container with ID starting with 4e0b6b773b65a55c3d79878fb3e734f2310fefdf5015f609d2da7c767492508f not found: ID does not exist" containerID="4e0b6b773b65a55c3d79878fb3e734f2310fefdf5015f609d2da7c767492508f"
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.861264 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e0b6b773b65a55c3d79878fb3e734f2310fefdf5015f609d2da7c767492508f"} err="failed to get container status \"4e0b6b773b65a55c3d79878fb3e734f2310fefdf5015f609d2da7c767492508f\": rpc error: code = NotFound desc = could not find container \"4e0b6b773b65a55c3d79878fb3e734f2310fefdf5015f609d2da7c767492508f\": container with ID starting with 4e0b6b773b65a55c3d79878fb3e734f2310fefdf5015f609d2da7c767492508f not found: ID does not exist"
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.861345 4811 scope.go:117] "RemoveContainer" containerID="7d9dff2a0f983af095cd0917ff1eba41e3cced8a890ab1139bb71c470a230960"
Feb 16 21:15:14 crc kubenswrapper[4811]: E0216 21:15:14.861789 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d9dff2a0f983af095cd0917ff1eba41e3cced8a890ab1139bb71c470a230960\": container with ID starting with 7d9dff2a0f983af095cd0917ff1eba41e3cced8a890ab1139bb71c470a230960 not found: ID does not exist" containerID="7d9dff2a0f983af095cd0917ff1eba41e3cced8a890ab1139bb71c470a230960"
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.861836 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d9dff2a0f983af095cd0917ff1eba41e3cced8a890ab1139bb71c470a230960"} err="failed to get container status \"7d9dff2a0f983af095cd0917ff1eba41e3cced8a890ab1139bb71c470a230960\": rpc error: code = NotFound desc = could not find container \"7d9dff2a0f983af095cd0917ff1eba41e3cced8a890ab1139bb71c470a230960\": container with ID starting with 7d9dff2a0f983af095cd0917ff1eba41e3cced8a890ab1139bb71c470a230960 not found: ID does not exist"
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.861864 4811 scope.go:117] "RemoveContainer" containerID="de73a119376c0a3b4b072a32d8f7779c6a96c94024559e0d040f3ebbde73e544"
Feb 16 21:15:14 crc kubenswrapper[4811]: E0216 21:15:14.862174 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de73a119376c0a3b4b072a32d8f7779c6a96c94024559e0d040f3ebbde73e544\": container with ID starting with de73a119376c0a3b4b072a32d8f7779c6a96c94024559e0d040f3ebbde73e544 not found: ID does not exist" containerID="de73a119376c0a3b4b072a32d8f7779c6a96c94024559e0d040f3ebbde73e544"
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.862435 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de73a119376c0a3b4b072a32d8f7779c6a96c94024559e0d040f3ebbde73e544"} err="failed to get container status \"de73a119376c0a3b4b072a32d8f7779c6a96c94024559e0d040f3ebbde73e544\": rpc error: code = NotFound desc = could not find container \"de73a119376c0a3b4b072a32d8f7779c6a96c94024559e0d040f3ebbde73e544\": container with ID starting with de73a119376c0a3b4b072a32d8f7779c6a96c94024559e0d040f3ebbde73e544 not found: ID does not exist"
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.862528 4811 scope.go:117] "RemoveContainer" containerID="6236fc4cf7961be741fa4430cbff75473af1b51a609f3b70f2affeff786489ef"
Feb 16 21:15:14 crc kubenswrapper[4811]: E0216 21:15:14.862953 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6236fc4cf7961be741fa4430cbff75473af1b51a609f3b70f2affeff786489ef\": container with ID starting with 6236fc4cf7961be741fa4430cbff75473af1b51a609f3b70f2affeff786489ef not found: ID does not exist" containerID="6236fc4cf7961be741fa4430cbff75473af1b51a609f3b70f2affeff786489ef"
Feb 16 21:15:14 crc kubenswrapper[4811]: I0216 21:15:14.862999 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6236fc4cf7961be741fa4430cbff75473af1b51a609f3b70f2affeff786489ef"} err="failed to get container status \"6236fc4cf7961be741fa4430cbff75473af1b51a609f3b70f2affeff786489ef\": rpc error: code = NotFound desc = could not find container \"6236fc4cf7961be741fa4430cbff75473af1b51a609f3b70f2affeff786489ef\": container with ID starting with 6236fc4cf7961be741fa4430cbff75473af1b51a609f3b70f2affeff786489ef not found: ID does not exist"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.037840 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.048607 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.074546 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 16 21:15:15 crc kubenswrapper[4811]: E0216 21:15:15.076434 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" containerName="sg-core"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.076463 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" containerName="sg-core"
Feb 16 21:15:15 crc kubenswrapper[4811]: E0216 21:15:15.076489 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" containerName="ceilometer-notification-agent"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.076500 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" containerName="ceilometer-notification-agent"
Feb 16 21:15:15 crc kubenswrapper[4811]: E0216 21:15:15.076527 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" containerName="proxy-httpd"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.076537 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" containerName="proxy-httpd"
Feb 16 21:15:15 crc kubenswrapper[4811]: E0216 21:15:15.076554 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" containerName="ceilometer-central-agent"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.076562 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" containerName="ceilometer-central-agent"
Feb 16 21:15:15 crc kubenswrapper[4811]: E0216 21:15:15.076583 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c1f2daa-8812-4798-a597-c73a581328a6" containerName="collect-profiles"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.076591 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c1f2daa-8812-4798-a597-c73a581328a6" containerName="collect-profiles"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.076877 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" containerName="ceilometer-central-agent"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.076903 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" containerName="ceilometer-notification-agent"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.076926 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" containerName="proxy-httpd"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.076939 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c1f2daa-8812-4798-a597-c73a581328a6" containerName="collect-profiles"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.076963 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" containerName="sg-core"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.079433 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.084548 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.084750 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.123255 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pvdf\" (UniqueName: \"kubernetes.io/projected/7e772b95-2fcf-4480-a3a9-926834bd068f-kube-api-access-4pvdf\") pod \"ceilometer-0\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " pod="openstack/ceilometer-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.123316 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e772b95-2fcf-4480-a3a9-926834bd068f-config-data\") pod \"ceilometer-0\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " pod="openstack/ceilometer-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.123372 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e772b95-2fcf-4480-a3a9-926834bd068f-run-httpd\") pod \"ceilometer-0\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " pod="openstack/ceilometer-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.123418 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e772b95-2fcf-4480-a3a9-926834bd068f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " pod="openstack/ceilometer-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.123447 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e772b95-2fcf-4480-a3a9-926834bd068f-log-httpd\") pod \"ceilometer-0\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " pod="openstack/ceilometer-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.123536 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e772b95-2fcf-4480-a3a9-926834bd068f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " pod="openstack/ceilometer-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.123566 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e772b95-2fcf-4480-a3a9-926834bd068f-scripts\") pod \"ceilometer-0\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " pod="openstack/ceilometer-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.127129 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.225300 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e772b95-2fcf-4480-a3a9-926834bd068f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " pod="openstack/ceilometer-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.225358 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e772b95-2fcf-4480-a3a9-926834bd068f-scripts\") pod \"ceilometer-0\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " pod="openstack/ceilometer-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.225458 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pvdf\" (UniqueName: \"kubernetes.io/projected/7e772b95-2fcf-4480-a3a9-926834bd068f-kube-api-access-4pvdf\") pod \"ceilometer-0\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " pod="openstack/ceilometer-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.225496 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e772b95-2fcf-4480-a3a9-926834bd068f-config-data\") pod \"ceilometer-0\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " pod="openstack/ceilometer-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.225549 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e772b95-2fcf-4480-a3a9-926834bd068f-run-httpd\") pod \"ceilometer-0\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " pod="openstack/ceilometer-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.225593 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e772b95-2fcf-4480-a3a9-926834bd068f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " pod="openstack/ceilometer-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.225663 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e772b95-2fcf-4480-a3a9-926834bd068f-log-httpd\") pod \"ceilometer-0\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " pod="openstack/ceilometer-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.226557 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e772b95-2fcf-4480-a3a9-926834bd068f-run-httpd\") pod \"ceilometer-0\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " pod="openstack/ceilometer-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.227054 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e772b95-2fcf-4480-a3a9-926834bd068f-log-httpd\") pod \"ceilometer-0\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " pod="openstack/ceilometer-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.230473 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e772b95-2fcf-4480-a3a9-926834bd068f-scripts\") pod \"ceilometer-0\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " pod="openstack/ceilometer-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.231014 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e772b95-2fcf-4480-a3a9-926834bd068f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " pod="openstack/ceilometer-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.231245 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e772b95-2fcf-4480-a3a9-926834bd068f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " pod="openstack/ceilometer-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.232807 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e772b95-2fcf-4480-a3a9-926834bd068f-config-data\") pod \"ceilometer-0\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " pod="openstack/ceilometer-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.245966 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pvdf\" (UniqueName: \"kubernetes.io/projected/7e772b95-2fcf-4480-a3a9-926834bd068f-kube-api-access-4pvdf\") pod \"ceilometer-0\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " pod="openstack/ceilometer-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.455955 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.739407 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.739459 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.777954 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.783122 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 16 21:15:15 crc kubenswrapper[4811]: I0216 21:15:15.929697 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.060910 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-l2gwq"
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.148645 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a532407-8a9b-4764-ac7b-d4af3c9e53e5-config-data\") pod \"9a532407-8a9b-4764-ac7b-d4af3c9e53e5\" (UID: \"9a532407-8a9b-4764-ac7b-d4af3c9e53e5\") "
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.148744 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn4x8\" (UniqueName: \"kubernetes.io/projected/9a532407-8a9b-4764-ac7b-d4af3c9e53e5-kube-api-access-fn4x8\") pod \"9a532407-8a9b-4764-ac7b-d4af3c9e53e5\" (UID: \"9a532407-8a9b-4764-ac7b-d4af3c9e53e5\") "
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.148781 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a532407-8a9b-4764-ac7b-d4af3c9e53e5-scripts\") pod \"9a532407-8a9b-4764-ac7b-d4af3c9e53e5\" (UID: \"9a532407-8a9b-4764-ac7b-d4af3c9e53e5\") "
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.148989 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a532407-8a9b-4764-ac7b-d4af3c9e53e5-combined-ca-bundle\") pod \"9a532407-8a9b-4764-ac7b-d4af3c9e53e5\" (UID: \"9a532407-8a9b-4764-ac7b-d4af3c9e53e5\") "
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.154388 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a532407-8a9b-4764-ac7b-d4af3c9e53e5-kube-api-access-fn4x8" (OuterVolumeSpecName: "kube-api-access-fn4x8") pod "9a532407-8a9b-4764-ac7b-d4af3c9e53e5" (UID: "9a532407-8a9b-4764-ac7b-d4af3c9e53e5"). InnerVolumeSpecName "kube-api-access-fn4x8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.155416 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a532407-8a9b-4764-ac7b-d4af3c9e53e5-scripts" (OuterVolumeSpecName: "scripts") pod "9a532407-8a9b-4764-ac7b-d4af3c9e53e5" (UID: "9a532407-8a9b-4764-ac7b-d4af3c9e53e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.176473 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a532407-8a9b-4764-ac7b-d4af3c9e53e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a532407-8a9b-4764-ac7b-d4af3c9e53e5" (UID: "9a532407-8a9b-4764-ac7b-d4af3c9e53e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.177360 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a532407-8a9b-4764-ac7b-d4af3c9e53e5-config-data" (OuterVolumeSpecName: "config-data") pod "9a532407-8a9b-4764-ac7b-d4af3c9e53e5" (UID: "9a532407-8a9b-4764-ac7b-d4af3c9e53e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.232450 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.232524 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.252107 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a532407-8a9b-4764-ac7b-d4af3c9e53e5-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.252156 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fn4x8\" (UniqueName: \"kubernetes.io/projected/9a532407-8a9b-4764-ac7b-d4af3c9e53e5-kube-api-access-fn4x8\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.252177 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a532407-8a9b-4764-ac7b-d4af3c9e53e5-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.252218 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a532407-8a9b-4764-ac7b-d4af3c9e53e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.266161 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.297137 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.717744 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ef4e03d-93b7-4d5c-a4d0-9341a950f03d" path="/var/lib/kubelet/pods/0ef4e03d-93b7-4d5c-a4d0-9341a950f03d/volumes"
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.748742 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-l2gwq" event={"ID":"9a532407-8a9b-4764-ac7b-d4af3c9e53e5","Type":"ContainerDied","Data":"816506f10210ab4e0405316bd5192acb760a017c8c900a8adaa55f17e99db6bb"}
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.748784 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="816506f10210ab4e0405316bd5192acb760a017c8c900a8adaa55f17e99db6bb"
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.748782 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-l2gwq"
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.751151 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e772b95-2fcf-4480-a3a9-926834bd068f","Type":"ContainerStarted","Data":"9b34c12ca8bdeb37f523ad8b60a74c33fbdd78d587fa290294a69079f9e9dd98"}
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.751177 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.751188 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e772b95-2fcf-4480-a3a9-926834bd068f","Type":"ContainerStarted","Data":"2bfbfa0291c97ec2644da6143067a7a88be9f444c629137d7872989959f7c351"}
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.752021 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.752038 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.752046 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.799447 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 16 21:15:16 crc kubenswrapper[4811]: E0216 21:15:16.799933 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a532407-8a9b-4764-ac7b-d4af3c9e53e5" containerName="nova-cell0-conductor-db-sync"
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.799954 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a532407-8a9b-4764-ac7b-d4af3c9e53e5" containerName="nova-cell0-conductor-db-sync"
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.800221 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a532407-8a9b-4764-ac7b-d4af3c9e53e5" containerName="nova-cell0-conductor-db-sync"
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.801021 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.805026 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.805329 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8g8lg"
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.867084 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.875839 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d07cdd-c697-40f4-b8a6-e4fd3719ebe3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d4d07cdd-c697-40f4-b8a6-e4fd3719ebe3\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.875917 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vs2j\" (UniqueName: \"kubernetes.io/projected/d4d07cdd-c697-40f4-b8a6-e4fd3719ebe3-kube-api-access-7vs2j\") pod \"nova-cell0-conductor-0\" (UID: \"d4d07cdd-c697-40f4-b8a6-e4fd3719ebe3\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.876113 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d07cdd-c697-40f4-b8a6-e4fd3719ebe3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d4d07cdd-c697-40f4-b8a6-e4fd3719ebe3\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.978829 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d07cdd-c697-40f4-b8a6-e4fd3719ebe3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d4d07cdd-c697-40f4-b8a6-e4fd3719ebe3\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.979357 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d07cdd-c697-40f4-b8a6-e4fd3719ebe3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d4d07cdd-c697-40f4-b8a6-e4fd3719ebe3\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.979392 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vs2j\" (UniqueName: \"kubernetes.io/projected/d4d07cdd-c697-40f4-b8a6-e4fd3719ebe3-kube-api-access-7vs2j\") pod \"nova-cell0-conductor-0\" (UID: \"d4d07cdd-c697-40f4-b8a6-e4fd3719ebe3\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.984778 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d07cdd-c697-40f4-b8a6-e4fd3719ebe3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d4d07cdd-c697-40f4-b8a6-e4fd3719ebe3\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 21:15:16 crc kubenswrapper[4811]: I0216 21:15:16.984789 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d07cdd-c697-40f4-b8a6-e4fd3719ebe3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d4d07cdd-c697-40f4-b8a6-e4fd3719ebe3\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 21:15:17 crc kubenswrapper[4811]: I0216 21:15:17.000668 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vs2j\" (UniqueName: \"kubernetes.io/projected/d4d07cdd-c697-40f4-b8a6-e4fd3719ebe3-kube-api-access-7vs2j\") pod \"nova-cell0-conductor-0\" (UID: \"d4d07cdd-c697-40f4-b8a6-e4fd3719ebe3\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 21:15:17 crc kubenswrapper[4811]: I0216 21:15:17.118996 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 16 21:15:17 crc kubenswrapper[4811]: I0216 21:15:17.634455 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 16 21:15:17 crc kubenswrapper[4811]: I0216 21:15:17.770802 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d4d07cdd-c697-40f4-b8a6-e4fd3719ebe3","Type":"ContainerStarted","Data":"af2ba4cbba193a92369bc3a2d14118302e4e18abc48cee23382f19d8ed6b37ca"}
Feb 16 21:15:17 crc kubenswrapper[4811]: I0216 21:15:17.783046 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e772b95-2fcf-4480-a3a9-926834bd068f","Type":"ContainerStarted","Data":"04bd0528099bf3be8bb22c23be9806b58d82414134de6f16a51c6607981eae81"}
Feb 16 21:15:18 crc kubenswrapper[4811]: I0216 21:15:18.364018 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 21:15:18 crc kubenswrapper[4811]: I0216 21:15:18.364071 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 21:15:18 crc kubenswrapper[4811]: I0216 21:15:18.657921 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 16 21:15:18 crc 
kubenswrapper[4811]: I0216 21:15:18.760834 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 21:15:18 crc kubenswrapper[4811]: I0216 21:15:18.776092 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 21:15:18 crc kubenswrapper[4811]: I0216 21:15:18.796648 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e772b95-2fcf-4480-a3a9-926834bd068f","Type":"ContainerStarted","Data":"5f7552948810e33b47a000a4e760bdfecb32c8fa7147e94dd879a087123cc73f"} Feb 16 21:15:18 crc kubenswrapper[4811]: I0216 21:15:18.798171 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d4d07cdd-c697-40f4-b8a6-e4fd3719ebe3","Type":"ContainerStarted","Data":"88b4b372129c750c0d8feadfcf669b793eb3d28748d7fd71675f285722c69367"} Feb 16 21:15:18 crc kubenswrapper[4811]: I0216 21:15:18.798413 4811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 21:15:18 crc kubenswrapper[4811]: I0216 21:15:18.799301 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 16 21:15:18 crc kubenswrapper[4811]: I0216 21:15:18.839919 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 21:15:18 crc kubenswrapper[4811]: I0216 21:15:18.847500 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.847485918 podStartE2EDuration="2.847485918s" podCreationTimestamp="2026-02-16 21:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:15:18.843085435 +0000 UTC m=+1136.772381373" watchObservedRunningTime="2026-02-16 21:15:18.847485918 +0000 UTC 
m=+1136.776781856" Feb 16 21:15:19 crc kubenswrapper[4811]: I0216 21:15:19.809446 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e772b95-2fcf-4480-a3a9-926834bd068f","Type":"ContainerStarted","Data":"95c6dd6c62591f720d02ecbdc815e4e595c5c163ad4943220ce0476cabc1fc74"} Feb 16 21:15:19 crc kubenswrapper[4811]: I0216 21:15:19.832789 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.230860341 podStartE2EDuration="4.832775268s" podCreationTimestamp="2026-02-16 21:15:15 +0000 UTC" firstStartedPulling="2026-02-16 21:15:15.934264914 +0000 UTC m=+1133.863560852" lastFinishedPulling="2026-02-16 21:15:19.536179841 +0000 UTC m=+1137.465475779" observedRunningTime="2026-02-16 21:15:19.8289475 +0000 UTC m=+1137.758243438" watchObservedRunningTime="2026-02-16 21:15:19.832775268 +0000 UTC m=+1137.762071196" Feb 16 21:15:20 crc kubenswrapper[4811]: I0216 21:15:20.818621 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 21:15:21 crc kubenswrapper[4811]: E0216 21:15:21.705213 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:15:22 crc kubenswrapper[4811]: I0216 21:15:22.173898 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 16 21:15:22 crc kubenswrapper[4811]: I0216 21:15:22.772635 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-d8l76"] Feb 16 21:15:22 crc kubenswrapper[4811]: I0216 21:15:22.774860 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-d8l76" Feb 16 21:15:22 crc kubenswrapper[4811]: I0216 21:15:22.783246 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 16 21:15:22 crc kubenswrapper[4811]: I0216 21:15:22.783608 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 16 21:15:22 crc kubenswrapper[4811]: I0216 21:15:22.806096 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8793e091-1ee8-417a-86fa-0c22af64bde3-config-data\") pod \"nova-cell0-cell-mapping-d8l76\" (UID: \"8793e091-1ee8-417a-86fa-0c22af64bde3\") " pod="openstack/nova-cell0-cell-mapping-d8l76" Feb 16 21:15:22 crc kubenswrapper[4811]: I0216 21:15:22.806188 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8793e091-1ee8-417a-86fa-0c22af64bde3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-d8l76\" (UID: \"8793e091-1ee8-417a-86fa-0c22af64bde3\") " pod="openstack/nova-cell0-cell-mapping-d8l76" Feb 16 21:15:22 crc kubenswrapper[4811]: I0216 21:15:22.806405 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8793e091-1ee8-417a-86fa-0c22af64bde3-scripts\") pod \"nova-cell0-cell-mapping-d8l76\" (UID: \"8793e091-1ee8-417a-86fa-0c22af64bde3\") " pod="openstack/nova-cell0-cell-mapping-d8l76" Feb 16 21:15:22 crc kubenswrapper[4811]: I0216 21:15:22.806570 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twjhp\" (UniqueName: \"kubernetes.io/projected/8793e091-1ee8-417a-86fa-0c22af64bde3-kube-api-access-twjhp\") pod \"nova-cell0-cell-mapping-d8l76\" (UID: \"8793e091-1ee8-417a-86fa-0c22af64bde3\") 
" pod="openstack/nova-cell0-cell-mapping-d8l76" Feb 16 21:15:22 crc kubenswrapper[4811]: I0216 21:15:22.809506 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-d8l76"] Feb 16 21:15:22 crc kubenswrapper[4811]: I0216 21:15:22.907877 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8793e091-1ee8-417a-86fa-0c22af64bde3-config-data\") pod \"nova-cell0-cell-mapping-d8l76\" (UID: \"8793e091-1ee8-417a-86fa-0c22af64bde3\") " pod="openstack/nova-cell0-cell-mapping-d8l76" Feb 16 21:15:22 crc kubenswrapper[4811]: I0216 21:15:22.907967 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8793e091-1ee8-417a-86fa-0c22af64bde3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-d8l76\" (UID: \"8793e091-1ee8-417a-86fa-0c22af64bde3\") " pod="openstack/nova-cell0-cell-mapping-d8l76" Feb 16 21:15:22 crc kubenswrapper[4811]: I0216 21:15:22.908039 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8793e091-1ee8-417a-86fa-0c22af64bde3-scripts\") pod \"nova-cell0-cell-mapping-d8l76\" (UID: \"8793e091-1ee8-417a-86fa-0c22af64bde3\") " pod="openstack/nova-cell0-cell-mapping-d8l76" Feb 16 21:15:22 crc kubenswrapper[4811]: I0216 21:15:22.908109 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twjhp\" (UniqueName: \"kubernetes.io/projected/8793e091-1ee8-417a-86fa-0c22af64bde3-kube-api-access-twjhp\") pod \"nova-cell0-cell-mapping-d8l76\" (UID: \"8793e091-1ee8-417a-86fa-0c22af64bde3\") " pod="openstack/nova-cell0-cell-mapping-d8l76" Feb 16 21:15:22 crc kubenswrapper[4811]: I0216 21:15:22.913696 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8793e091-1ee8-417a-86fa-0c22af64bde3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-d8l76\" (UID: \"8793e091-1ee8-417a-86fa-0c22af64bde3\") " pod="openstack/nova-cell0-cell-mapping-d8l76" Feb 16 21:15:22 crc kubenswrapper[4811]: I0216 21:15:22.914597 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8793e091-1ee8-417a-86fa-0c22af64bde3-config-data\") pod \"nova-cell0-cell-mapping-d8l76\" (UID: \"8793e091-1ee8-417a-86fa-0c22af64bde3\") " pod="openstack/nova-cell0-cell-mapping-d8l76" Feb 16 21:15:22 crc kubenswrapper[4811]: I0216 21:15:22.919029 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8793e091-1ee8-417a-86fa-0c22af64bde3-scripts\") pod \"nova-cell0-cell-mapping-d8l76\" (UID: \"8793e091-1ee8-417a-86fa-0c22af64bde3\") " pod="openstack/nova-cell0-cell-mapping-d8l76" Feb 16 21:15:22 crc kubenswrapper[4811]: I0216 21:15:22.940568 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twjhp\" (UniqueName: \"kubernetes.io/projected/8793e091-1ee8-417a-86fa-0c22af64bde3-kube-api-access-twjhp\") pod \"nova-cell0-cell-mapping-d8l76\" (UID: \"8793e091-1ee8-417a-86fa-0c22af64bde3\") " pod="openstack/nova-cell0-cell-mapping-d8l76" Feb 16 21:15:22 crc kubenswrapper[4811]: I0216 21:15:22.941081 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:15:22 crc kubenswrapper[4811]: I0216 21:15:22.942339 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:15:22 crc kubenswrapper[4811]: I0216 21:15:22.948472 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 21:15:22 crc kubenswrapper[4811]: I0216 21:15:22.965938 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.002324 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.005817 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.012235 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmlgd\" (UniqueName: \"kubernetes.io/projected/b320aa81-ef47-476e-8bfe-156fc797f12c-kube-api-access-xmlgd\") pod \"nova-scheduler-0\" (UID: \"b320aa81-ef47-476e-8bfe-156fc797f12c\") " pod="openstack/nova-scheduler-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.012407 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b320aa81-ef47-476e-8bfe-156fc797f12c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b320aa81-ef47-476e-8bfe-156fc797f12c\") " pod="openstack/nova-scheduler-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.012444 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b320aa81-ef47-476e-8bfe-156fc797f12c-config-data\") pod \"nova-scheduler-0\" (UID: \"b320aa81-ef47-476e-8bfe-156fc797f12c\") " pod="openstack/nova-scheduler-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.021165 4811 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"nova-metadata-config-data" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.036546 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.100473 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-d8l76" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.114400 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmlgd\" (UniqueName: \"kubernetes.io/projected/b320aa81-ef47-476e-8bfe-156fc797f12c-kube-api-access-xmlgd\") pod \"nova-scheduler-0\" (UID: \"b320aa81-ef47-476e-8bfe-156fc797f12c\") " pod="openstack/nova-scheduler-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.114450 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e26ae36-9e04-454a-b998-fd4a1c83b4d6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1e26ae36-9e04-454a-b998-fd4a1c83b4d6\") " pod="openstack/nova-metadata-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.114886 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbhrf\" (UniqueName: \"kubernetes.io/projected/1e26ae36-9e04-454a-b998-fd4a1c83b4d6-kube-api-access-hbhrf\") pod \"nova-metadata-0\" (UID: \"1e26ae36-9e04-454a-b998-fd4a1c83b4d6\") " pod="openstack/nova-metadata-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.114936 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b320aa81-ef47-476e-8bfe-156fc797f12c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b320aa81-ef47-476e-8bfe-156fc797f12c\") " pod="openstack/nova-scheduler-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.114956 4811 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e26ae36-9e04-454a-b998-fd4a1c83b4d6-config-data\") pod \"nova-metadata-0\" (UID: \"1e26ae36-9e04-454a-b998-fd4a1c83b4d6\") " pod="openstack/nova-metadata-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.114988 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b320aa81-ef47-476e-8bfe-156fc797f12c-config-data\") pod \"nova-scheduler-0\" (UID: \"b320aa81-ef47-476e-8bfe-156fc797f12c\") " pod="openstack/nova-scheduler-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.115024 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e26ae36-9e04-454a-b998-fd4a1c83b4d6-logs\") pod \"nova-metadata-0\" (UID: \"1e26ae36-9e04-454a-b998-fd4a1c83b4d6\") " pod="openstack/nova-metadata-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.121936 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b320aa81-ef47-476e-8bfe-156fc797f12c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b320aa81-ef47-476e-8bfe-156fc797f12c\") " pod="openstack/nova-scheduler-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.134921 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.137759 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b320aa81-ef47-476e-8bfe-156fc797f12c-config-data\") pod \"nova-scheduler-0\" (UID: \"b320aa81-ef47-476e-8bfe-156fc797f12c\") " pod="openstack/nova-scheduler-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.140060 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.154961 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.164526 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmlgd\" (UniqueName: \"kubernetes.io/projected/b320aa81-ef47-476e-8bfe-156fc797f12c-kube-api-access-xmlgd\") pod \"nova-scheduler-0\" (UID: \"b320aa81-ef47-476e-8bfe-156fc797f12c\") " pod="openstack/nova-scheduler-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.217326 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e259daff-e55a-47d6-b55d-0c63fa1fe468-config-data\") pod \"nova-api-0\" (UID: \"e259daff-e55a-47d6-b55d-0c63fa1fe468\") " pod="openstack/nova-api-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.217409 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbhrf\" (UniqueName: \"kubernetes.io/projected/1e26ae36-9e04-454a-b998-fd4a1c83b4d6-kube-api-access-hbhrf\") pod \"nova-metadata-0\" (UID: \"1e26ae36-9e04-454a-b998-fd4a1c83b4d6\") " pod="openstack/nova-metadata-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.217443 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmnbt\" (UniqueName: \"kubernetes.io/projected/e259daff-e55a-47d6-b55d-0c63fa1fe468-kube-api-access-gmnbt\") pod \"nova-api-0\" (UID: \"e259daff-e55a-47d6-b55d-0c63fa1fe468\") " pod="openstack/nova-api-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.217468 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e26ae36-9e04-454a-b998-fd4a1c83b4d6-config-data\") pod \"nova-metadata-0\" (UID: 
\"1e26ae36-9e04-454a-b998-fd4a1c83b4d6\") " pod="openstack/nova-metadata-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.217506 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e26ae36-9e04-454a-b998-fd4a1c83b4d6-logs\") pod \"nova-metadata-0\" (UID: \"1e26ae36-9e04-454a-b998-fd4a1c83b4d6\") " pod="openstack/nova-metadata-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.217525 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e259daff-e55a-47d6-b55d-0c63fa1fe468-logs\") pod \"nova-api-0\" (UID: \"e259daff-e55a-47d6-b55d-0c63fa1fe468\") " pod="openstack/nova-api-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.217547 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e259daff-e55a-47d6-b55d-0c63fa1fe468-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e259daff-e55a-47d6-b55d-0c63fa1fe468\") " pod="openstack/nova-api-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.217599 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e26ae36-9e04-454a-b998-fd4a1c83b4d6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1e26ae36-9e04-454a-b998-fd4a1c83b4d6\") " pod="openstack/nova-metadata-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.223802 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e26ae36-9e04-454a-b998-fd4a1c83b4d6-logs\") pod \"nova-metadata-0\" (UID: \"1e26ae36-9e04-454a-b998-fd4a1c83b4d6\") " pod="openstack/nova-metadata-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.225758 4811 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-bccf8f775-s4zmp"] Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.230644 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e26ae36-9e04-454a-b998-fd4a1c83b4d6-config-data\") pod \"nova-metadata-0\" (UID: \"1e26ae36-9e04-454a-b998-fd4a1c83b4d6\") " pod="openstack/nova-metadata-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.242104 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.256800 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e26ae36-9e04-454a-b998-fd4a1c83b4d6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1e26ae36-9e04-454a-b998-fd4a1c83b4d6\") " pod="openstack/nova-metadata-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.271891 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbhrf\" (UniqueName: \"kubernetes.io/projected/1e26ae36-9e04-454a-b998-fd4a1c83b4d6-kube-api-access-hbhrf\") pod \"nova-metadata-0\" (UID: \"1e26ae36-9e04-454a-b998-fd4a1c83b4d6\") " pod="openstack/nova-metadata-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.288164 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.304965 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.309253 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.312916 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.354289 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.360913 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.381957 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-config\") pod \"dnsmasq-dns-bccf8f775-s4zmp\" (UID: \"c9f7a117-80d9-4da7-a3e9-469976254cb9\") " pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.382048 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-s4zmp\" (UID: \"c9f7a117-80d9-4da7-a3e9-469976254cb9\") " pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.382080 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-s4zmp\" (UID: \"c9f7a117-80d9-4da7-a3e9-469976254cb9\") " pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.382706 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.383503 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e259daff-e55a-47d6-b55d-0c63fa1fe468-config-data\") pod \"nova-api-0\" (UID: \"e259daff-e55a-47d6-b55d-0c63fa1fe468\") " pod="openstack/nova-api-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.383550 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-dns-svc\") pod \"dnsmasq-dns-bccf8f775-s4zmp\" (UID: \"c9f7a117-80d9-4da7-a3e9-469976254cb9\") " pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.383597 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgkr9\" (UniqueName: \"kubernetes.io/projected/551b3f79-c4bc-46e0-8c10-ec86c30ec6d5-kube-api-access-wgkr9\") pod \"nova-cell1-novncproxy-0\" (UID: \"551b3f79-c4bc-46e0-8c10-ec86c30ec6d5\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.383615 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-s4zmp\" (UID: \"c9f7a117-80d9-4da7-a3e9-469976254cb9\") " pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.383669 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpx7w\" (UniqueName: \"kubernetes.io/projected/c9f7a117-80d9-4da7-a3e9-469976254cb9-kube-api-access-dpx7w\") pod \"dnsmasq-dns-bccf8f775-s4zmp\" (UID: \"c9f7a117-80d9-4da7-a3e9-469976254cb9\") " 
pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.383709 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmnbt\" (UniqueName: \"kubernetes.io/projected/e259daff-e55a-47d6-b55d-0c63fa1fe468-kube-api-access-gmnbt\") pod \"nova-api-0\" (UID: \"e259daff-e55a-47d6-b55d-0c63fa1fe468\") " pod="openstack/nova-api-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.383778 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551b3f79-c4bc-46e0-8c10-ec86c30ec6d5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"551b3f79-c4bc-46e0-8c10-ec86c30ec6d5\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.383805 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e259daff-e55a-47d6-b55d-0c63fa1fe468-logs\") pod \"nova-api-0\" (UID: \"e259daff-e55a-47d6-b55d-0c63fa1fe468\") " pod="openstack/nova-api-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.383840 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e259daff-e55a-47d6-b55d-0c63fa1fe468-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e259daff-e55a-47d6-b55d-0c63fa1fe468\") " pod="openstack/nova-api-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.383935 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551b3f79-c4bc-46e0-8c10-ec86c30ec6d5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"551b3f79-c4bc-46e0-8c10-ec86c30ec6d5\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.389017 4811 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e259daff-e55a-47d6-b55d-0c63fa1fe468-logs\") pod \"nova-api-0\" (UID: \"e259daff-e55a-47d6-b55d-0c63fa1fe468\") " pod="openstack/nova-api-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.391273 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e259daff-e55a-47d6-b55d-0c63fa1fe468-config-data\") pod \"nova-api-0\" (UID: \"e259daff-e55a-47d6-b55d-0c63fa1fe468\") " pod="openstack/nova-api-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.399081 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e259daff-e55a-47d6-b55d-0c63fa1fe468-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e259daff-e55a-47d6-b55d-0c63fa1fe468\") " pod="openstack/nova-api-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.410138 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-s4zmp"] Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.425248 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmnbt\" (UniqueName: \"kubernetes.io/projected/e259daff-e55a-47d6-b55d-0c63fa1fe468-kube-api-access-gmnbt\") pod \"nova-api-0\" (UID: \"e259daff-e55a-47d6-b55d-0c63fa1fe468\") " pod="openstack/nova-api-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.488368 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551b3f79-c4bc-46e0-8c10-ec86c30ec6d5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"551b3f79-c4bc-46e0-8c10-ec86c30ec6d5\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.488439 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/551b3f79-c4bc-46e0-8c10-ec86c30ec6d5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"551b3f79-c4bc-46e0-8c10-ec86c30ec6d5\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.488490 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-config\") pod \"dnsmasq-dns-bccf8f775-s4zmp\" (UID: \"c9f7a117-80d9-4da7-a3e9-469976254cb9\") " pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.488517 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-s4zmp\" (UID: \"c9f7a117-80d9-4da7-a3e9-469976254cb9\") " pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.488533 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-s4zmp\" (UID: \"c9f7a117-80d9-4da7-a3e9-469976254cb9\") " pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.488572 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-dns-svc\") pod \"dnsmasq-dns-bccf8f775-s4zmp\" (UID: \"c9f7a117-80d9-4da7-a3e9-469976254cb9\") " pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.488596 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgkr9\" (UniqueName: \"kubernetes.io/projected/551b3f79-c4bc-46e0-8c10-ec86c30ec6d5-kube-api-access-wgkr9\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"551b3f79-c4bc-46e0-8c10-ec86c30ec6d5\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.488611 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-s4zmp\" (UID: \"c9f7a117-80d9-4da7-a3e9-469976254cb9\") " pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.488647 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpx7w\" (UniqueName: \"kubernetes.io/projected/c9f7a117-80d9-4da7-a3e9-469976254cb9-kube-api-access-dpx7w\") pod \"dnsmasq-dns-bccf8f775-s4zmp\" (UID: \"c9f7a117-80d9-4da7-a3e9-469976254cb9\") " pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.491493 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-s4zmp\" (UID: \"c9f7a117-80d9-4da7-a3e9-469976254cb9\") " pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.493177 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-config\") pod \"dnsmasq-dns-bccf8f775-s4zmp\" (UID: \"c9f7a117-80d9-4da7-a3e9-469976254cb9\") " pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.493654 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-dns-svc\") pod \"dnsmasq-dns-bccf8f775-s4zmp\" (UID: \"c9f7a117-80d9-4da7-a3e9-469976254cb9\") " 
pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.493759 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-s4zmp\" (UID: \"c9f7a117-80d9-4da7-a3e9-469976254cb9\") " pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.494213 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-s4zmp\" (UID: \"c9f7a117-80d9-4da7-a3e9-469976254cb9\") " pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.498368 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551b3f79-c4bc-46e0-8c10-ec86c30ec6d5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"551b3f79-c4bc-46e0-8c10-ec86c30ec6d5\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.498949 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551b3f79-c4bc-46e0-8c10-ec86c30ec6d5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"551b3f79-c4bc-46e0-8c10-ec86c30ec6d5\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.511064 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpx7w\" (UniqueName: \"kubernetes.io/projected/c9f7a117-80d9-4da7-a3e9-469976254cb9-kube-api-access-dpx7w\") pod \"dnsmasq-dns-bccf8f775-s4zmp\" (UID: \"c9f7a117-80d9-4da7-a3e9-469976254cb9\") " pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 
21:15:23.513886 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgkr9\" (UniqueName: \"kubernetes.io/projected/551b3f79-c4bc-46e0-8c10-ec86c30ec6d5-kube-api-access-wgkr9\") pod \"nova-cell1-novncproxy-0\" (UID: \"551b3f79-c4bc-46e0-8c10-ec86c30ec6d5\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.596838 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.618023 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.682668 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.809886 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-d8l76"] Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.896724 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-d8l76" event={"ID":"8793e091-1ee8-417a-86fa-0c22af64bde3","Type":"ContainerStarted","Data":"336af0063a2a5f64c6209dac0688d01bd40c2956e48d08f649b3bf66896b176e"} Feb 16 21:15:23 crc kubenswrapper[4811]: I0216 21:15:23.989981 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.113637 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hcp7j"] Feb 16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.116091 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hcp7j" Feb 16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.119710 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.119917 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.143052 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hcp7j"] Feb 16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.153526 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.203692 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2be2a7d3-2469-41ce-a8d4-baf6e58aece5-scripts\") pod \"nova-cell1-conductor-db-sync-hcp7j\" (UID: \"2be2a7d3-2469-41ce-a8d4-baf6e58aece5\") " pod="openstack/nova-cell1-conductor-db-sync-hcp7j" Feb 16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.203846 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2be2a7d3-2469-41ce-a8d4-baf6e58aece5-config-data\") pod \"nova-cell1-conductor-db-sync-hcp7j\" (UID: \"2be2a7d3-2469-41ce-a8d4-baf6e58aece5\") " pod="openstack/nova-cell1-conductor-db-sync-hcp7j" Feb 16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.203869 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2sz6\" (UniqueName: \"kubernetes.io/projected/2be2a7d3-2469-41ce-a8d4-baf6e58aece5-kube-api-access-m2sz6\") pod \"nova-cell1-conductor-db-sync-hcp7j\" (UID: \"2be2a7d3-2469-41ce-a8d4-baf6e58aece5\") " pod="openstack/nova-cell1-conductor-db-sync-hcp7j" Feb 
16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.203895 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2be2a7d3-2469-41ce-a8d4-baf6e58aece5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hcp7j\" (UID: \"2be2a7d3-2469-41ce-a8d4-baf6e58aece5\") " pod="openstack/nova-cell1-conductor-db-sync-hcp7j" Feb 16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.305409 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2be2a7d3-2469-41ce-a8d4-baf6e58aece5-config-data\") pod \"nova-cell1-conductor-db-sync-hcp7j\" (UID: \"2be2a7d3-2469-41ce-a8d4-baf6e58aece5\") " pod="openstack/nova-cell1-conductor-db-sync-hcp7j" Feb 16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.305450 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2sz6\" (UniqueName: \"kubernetes.io/projected/2be2a7d3-2469-41ce-a8d4-baf6e58aece5-kube-api-access-m2sz6\") pod \"nova-cell1-conductor-db-sync-hcp7j\" (UID: \"2be2a7d3-2469-41ce-a8d4-baf6e58aece5\") " pod="openstack/nova-cell1-conductor-db-sync-hcp7j" Feb 16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.305470 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2be2a7d3-2469-41ce-a8d4-baf6e58aece5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hcp7j\" (UID: \"2be2a7d3-2469-41ce-a8d4-baf6e58aece5\") " pod="openstack/nova-cell1-conductor-db-sync-hcp7j" Feb 16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.305542 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2be2a7d3-2469-41ce-a8d4-baf6e58aece5-scripts\") pod \"nova-cell1-conductor-db-sync-hcp7j\" (UID: \"2be2a7d3-2469-41ce-a8d4-baf6e58aece5\") " 
pod="openstack/nova-cell1-conductor-db-sync-hcp7j" Feb 16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.313498 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2be2a7d3-2469-41ce-a8d4-baf6e58aece5-config-data\") pod \"nova-cell1-conductor-db-sync-hcp7j\" (UID: \"2be2a7d3-2469-41ce-a8d4-baf6e58aece5\") " pod="openstack/nova-cell1-conductor-db-sync-hcp7j" Feb 16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.314573 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2be2a7d3-2469-41ce-a8d4-baf6e58aece5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hcp7j\" (UID: \"2be2a7d3-2469-41ce-a8d4-baf6e58aece5\") " pod="openstack/nova-cell1-conductor-db-sync-hcp7j" Feb 16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.314663 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2be2a7d3-2469-41ce-a8d4-baf6e58aece5-scripts\") pod \"nova-cell1-conductor-db-sync-hcp7j\" (UID: \"2be2a7d3-2469-41ce-a8d4-baf6e58aece5\") " pod="openstack/nova-cell1-conductor-db-sync-hcp7j" Feb 16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.326813 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2sz6\" (UniqueName: \"kubernetes.io/projected/2be2a7d3-2469-41ce-a8d4-baf6e58aece5-kube-api-access-m2sz6\") pod \"nova-cell1-conductor-db-sync-hcp7j\" (UID: \"2be2a7d3-2469-41ce-a8d4-baf6e58aece5\") " pod="openstack/nova-cell1-conductor-db-sync-hcp7j" Feb 16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.444747 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-s4zmp"] Feb 16 21:15:24 crc kubenswrapper[4811]: W0216 21:15:24.448984 4811 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9f7a117_80d9_4da7_a3e9_469976254cb9.slice/crio-2fdc603e15c5d997cef74281b893cb4c6b518e53e00fc3ee8be8532d79dfe3fd WatchSource:0}: Error finding container 2fdc603e15c5d997cef74281b893cb4c6b518e53e00fc3ee8be8532d79dfe3fd: Status 404 returned error can't find the container with id 2fdc603e15c5d997cef74281b893cb4c6b518e53e00fc3ee8be8532d79dfe3fd Feb 16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.461308 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.471169 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.502525 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hcp7j" Feb 16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.924260 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" event={"ID":"c9f7a117-80d9-4da7-a3e9-469976254cb9","Type":"ContainerStarted","Data":"2fdc603e15c5d997cef74281b893cb4c6b518e53e00fc3ee8be8532d79dfe3fd"} Feb 16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.926551 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e26ae36-9e04-454a-b998-fd4a1c83b4d6","Type":"ContainerStarted","Data":"b41bbec2bb21f1a6e790ebd8f8b0b3d76653a027b5c237be0a3f034f5af51ff4"} Feb 16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.928254 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e259daff-e55a-47d6-b55d-0c63fa1fe468","Type":"ContainerStarted","Data":"854ac225a5109a27bf159a056897eb17ae684addd44a430b8c7ad4a4ff9555a8"} Feb 16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.931532 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"551b3f79-c4bc-46e0-8c10-ec86c30ec6d5","Type":"ContainerStarted","Data":"ccd7b95775479533bcf5c82479c380dfd7cecd975b0afa82c64225889eea100f"} Feb 16 21:15:24 crc kubenswrapper[4811]: I0216 21:15:24.935052 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b320aa81-ef47-476e-8bfe-156fc797f12c","Type":"ContainerStarted","Data":"62e779f0896c157914c9d072d323b5972e8884b409020a52913487bd5fb587ce"} Feb 16 21:15:25 crc kubenswrapper[4811]: I0216 21:15:25.007004 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hcp7j"] Feb 16 21:15:25 crc kubenswrapper[4811]: W0216 21:15:25.016729 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2be2a7d3_2469_41ce_a8d4_baf6e58aece5.slice/crio-99584bfe3e9dc1d8d900849d947275d33475a25d869fc76476172a1b9ca4d14f WatchSource:0}: Error finding container 99584bfe3e9dc1d8d900849d947275d33475a25d869fc76476172a1b9ca4d14f: Status 404 returned error can't find the container with id 99584bfe3e9dc1d8d900849d947275d33475a25d869fc76476172a1b9ca4d14f Feb 16 21:15:25 crc kubenswrapper[4811]: I0216 21:15:25.950946 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hcp7j" event={"ID":"2be2a7d3-2469-41ce-a8d4-baf6e58aece5","Type":"ContainerStarted","Data":"6a42419938614270f85e664c5279f3464f9ac631067acf322dbecc09fd515997"} Feb 16 21:15:25 crc kubenswrapper[4811]: I0216 21:15:25.951279 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hcp7j" event={"ID":"2be2a7d3-2469-41ce-a8d4-baf6e58aece5","Type":"ContainerStarted","Data":"99584bfe3e9dc1d8d900849d947275d33475a25d869fc76476172a1b9ca4d14f"} Feb 16 21:15:25 crc kubenswrapper[4811]: I0216 21:15:25.955851 4811 generic.go:334] "Generic (PLEG): container finished" podID="c9f7a117-80d9-4da7-a3e9-469976254cb9" 
containerID="a449145a385dddb17233305c61d4f8f8de92f5515aba9d4cc00578a8ada77ce8" exitCode=0 Feb 16 21:15:25 crc kubenswrapper[4811]: I0216 21:15:25.955917 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" event={"ID":"c9f7a117-80d9-4da7-a3e9-469976254cb9","Type":"ContainerDied","Data":"a449145a385dddb17233305c61d4f8f8de92f5515aba9d4cc00578a8ada77ce8"} Feb 16 21:15:25 crc kubenswrapper[4811]: I0216 21:15:25.960993 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-d8l76" event={"ID":"8793e091-1ee8-417a-86fa-0c22af64bde3","Type":"ContainerStarted","Data":"ba149d7a1af50bcfd429e917e6c06672d3954677d270782eb4fecae0377ca675"} Feb 16 21:15:25 crc kubenswrapper[4811]: I0216 21:15:25.969006 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-hcp7j" podStartSLOduration=1.968987649 podStartE2EDuration="1.968987649s" podCreationTimestamp="2026-02-16 21:15:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:15:25.968567688 +0000 UTC m=+1143.897863646" watchObservedRunningTime="2026-02-16 21:15:25.968987649 +0000 UTC m=+1143.898283597" Feb 16 21:15:25 crc kubenswrapper[4811]: I0216 21:15:25.991330 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-d8l76" podStartSLOduration=3.99131254 podStartE2EDuration="3.99131254s" podCreationTimestamp="2026-02-16 21:15:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:15:25.986130257 +0000 UTC m=+1143.915426205" watchObservedRunningTime="2026-02-16 21:15:25.99131254 +0000 UTC m=+1143.920608488" Feb 16 21:15:26 crc kubenswrapper[4811]: I0216 21:15:26.848498 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 21:15:26 crc kubenswrapper[4811]: I0216 21:15:26.867432 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.007173 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e259daff-e55a-47d6-b55d-0c63fa1fe468","Type":"ContainerStarted","Data":"53b5d3f8756c1c220a34c53d89a9cd4b0f114769e6edaeb7cc6afdc07c6e63f7"} Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.007654 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e259daff-e55a-47d6-b55d-0c63fa1fe468","Type":"ContainerStarted","Data":"241ace9234c895fbde5de6da0dd2ee18f6b33a503ebb72262ea0bc0d1f76a9c8"} Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.009337 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"551b3f79-c4bc-46e0-8c10-ec86c30ec6d5","Type":"ContainerStarted","Data":"27647711d95f31f6de815a4e30e19d291a3950473b3c2fe14f3099ad08b620ce"} Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.009480 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="551b3f79-c4bc-46e0-8c10-ec86c30ec6d5" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://27647711d95f31f6de815a4e30e19d291a3950473b3c2fe14f3099ad08b620ce" gracePeriod=30 Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.012243 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b320aa81-ef47-476e-8bfe-156fc797f12c","Type":"ContainerStarted","Data":"46cb8bab1993b7cf67e1280178d88ab8f33b88ed436aa4748268052894df0b21"} Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.016754 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" 
event={"ID":"c9f7a117-80d9-4da7-a3e9-469976254cb9","Type":"ContainerStarted","Data":"07a87b22a6879d1a02509d9a533f78da88a76452f2b0d8ec7fd2d71532311a1c"} Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.017299 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.022876 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e26ae36-9e04-454a-b998-fd4a1c83b4d6","Type":"ContainerStarted","Data":"6669bd5a7683703313b643ab748ef0ae70af690b8a93199863c72a240823d68d"} Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.022924 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e26ae36-9e04-454a-b998-fd4a1c83b4d6","Type":"ContainerStarted","Data":"e32890cc19b738cce37babd3008ed8d50dd1166e3b4529733bdccf976db64b93"} Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.023038 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1e26ae36-9e04-454a-b998-fd4a1c83b4d6" containerName="nova-metadata-log" containerID="cri-o://e32890cc19b738cce37babd3008ed8d50dd1166e3b4529733bdccf976db64b93" gracePeriod=30 Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.023266 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1e26ae36-9e04-454a-b998-fd4a1c83b4d6" containerName="nova-metadata-metadata" containerID="cri-o://6669bd5a7683703313b643ab748ef0ae70af690b8a93199863c72a240823d68d" gracePeriod=30 Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.034887 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.8967706939999998 podStartE2EDuration="6.034863592s" podCreationTimestamp="2026-02-16 21:15:23 +0000 UTC" firstStartedPulling="2026-02-16 21:15:24.486809844 +0000 UTC 
m=+1142.416105782" lastFinishedPulling="2026-02-16 21:15:27.624902742 +0000 UTC m=+1145.554198680" observedRunningTime="2026-02-16 21:15:29.029743411 +0000 UTC m=+1146.959039349" watchObservedRunningTime="2026-02-16 21:15:29.034863592 +0000 UTC m=+1146.964159540" Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.062636 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.918655953 podStartE2EDuration="6.062612911s" podCreationTimestamp="2026-02-16 21:15:23 +0000 UTC" firstStartedPulling="2026-02-16 21:15:24.47335416 +0000 UTC m=+1142.402650098" lastFinishedPulling="2026-02-16 21:15:27.617311118 +0000 UTC m=+1145.546607056" observedRunningTime="2026-02-16 21:15:29.048764497 +0000 UTC m=+1146.978060445" watchObservedRunningTime="2026-02-16 21:15:29.062612911 +0000 UTC m=+1146.991908849" Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.080337 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.627504295 podStartE2EDuration="7.080312203s" podCreationTimestamp="2026-02-16 21:15:22 +0000 UTC" firstStartedPulling="2026-02-16 21:15:24.160319913 +0000 UTC m=+1142.089615851" lastFinishedPulling="2026-02-16 21:15:27.613127821 +0000 UTC m=+1145.542423759" observedRunningTime="2026-02-16 21:15:29.069346833 +0000 UTC m=+1146.998642771" watchObservedRunningTime="2026-02-16 21:15:29.080312203 +0000 UTC m=+1147.009608141" Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.096972 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" podStartSLOduration=6.096950658 podStartE2EDuration="6.096950658s" podCreationTimestamp="2026-02-16 21:15:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:15:29.091571371 +0000 UTC m=+1147.020867309" 
watchObservedRunningTime="2026-02-16 21:15:29.096950658 +0000 UTC m=+1147.026246616" Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.136479 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.4972070459999998 podStartE2EDuration="7.136451397s" podCreationTimestamp="2026-02-16 21:15:22 +0000 UTC" firstStartedPulling="2026-02-16 21:15:23.97897019 +0000 UTC m=+1141.908266128" lastFinishedPulling="2026-02-16 21:15:27.618214541 +0000 UTC m=+1145.547510479" observedRunningTime="2026-02-16 21:15:29.107347674 +0000 UTC m=+1147.036643642" watchObservedRunningTime="2026-02-16 21:15:29.136451397 +0000 UTC m=+1147.065747345" Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.665099 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.742795 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbhrf\" (UniqueName: \"kubernetes.io/projected/1e26ae36-9e04-454a-b998-fd4a1c83b4d6-kube-api-access-hbhrf\") pod \"1e26ae36-9e04-454a-b998-fd4a1c83b4d6\" (UID: \"1e26ae36-9e04-454a-b998-fd4a1c83b4d6\") " Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.742846 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e26ae36-9e04-454a-b998-fd4a1c83b4d6-config-data\") pod \"1e26ae36-9e04-454a-b998-fd4a1c83b4d6\" (UID: \"1e26ae36-9e04-454a-b998-fd4a1c83b4d6\") " Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.742894 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e26ae36-9e04-454a-b998-fd4a1c83b4d6-logs\") pod \"1e26ae36-9e04-454a-b998-fd4a1c83b4d6\" (UID: \"1e26ae36-9e04-454a-b998-fd4a1c83b4d6\") " Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.743100 4811 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e26ae36-9e04-454a-b998-fd4a1c83b4d6-combined-ca-bundle\") pod \"1e26ae36-9e04-454a-b998-fd4a1c83b4d6\" (UID: \"1e26ae36-9e04-454a-b998-fd4a1c83b4d6\") " Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.743500 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e26ae36-9e04-454a-b998-fd4a1c83b4d6-logs" (OuterVolumeSpecName: "logs") pod "1e26ae36-9e04-454a-b998-fd4a1c83b4d6" (UID: "1e26ae36-9e04-454a-b998-fd4a1c83b4d6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.743601 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e26ae36-9e04-454a-b998-fd4a1c83b4d6-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.758513 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e26ae36-9e04-454a-b998-fd4a1c83b4d6-kube-api-access-hbhrf" (OuterVolumeSpecName: "kube-api-access-hbhrf") pod "1e26ae36-9e04-454a-b998-fd4a1c83b4d6" (UID: "1e26ae36-9e04-454a-b998-fd4a1c83b4d6"). InnerVolumeSpecName "kube-api-access-hbhrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.774666 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e26ae36-9e04-454a-b998-fd4a1c83b4d6-config-data" (OuterVolumeSpecName: "config-data") pod "1e26ae36-9e04-454a-b998-fd4a1c83b4d6" (UID: "1e26ae36-9e04-454a-b998-fd4a1c83b4d6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.775756 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e26ae36-9e04-454a-b998-fd4a1c83b4d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e26ae36-9e04-454a-b998-fd4a1c83b4d6" (UID: "1e26ae36-9e04-454a-b998-fd4a1c83b4d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.845700 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbhrf\" (UniqueName: \"kubernetes.io/projected/1e26ae36-9e04-454a-b998-fd4a1c83b4d6-kube-api-access-hbhrf\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.846229 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e26ae36-9e04-454a-b998-fd4a1c83b4d6-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:29 crc kubenswrapper[4811]: I0216 21:15:29.846302 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e26ae36-9e04-454a-b998-fd4a1c83b4d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.036169 4811 generic.go:334] "Generic (PLEG): container finished" podID="1e26ae36-9e04-454a-b998-fd4a1c83b4d6" containerID="6669bd5a7683703313b643ab748ef0ae70af690b8a93199863c72a240823d68d" exitCode=0 Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.036214 4811 generic.go:334] "Generic (PLEG): container finished" podID="1e26ae36-9e04-454a-b998-fd4a1c83b4d6" containerID="e32890cc19b738cce37babd3008ed8d50dd1166e3b4529733bdccf976db64b93" exitCode=143 Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.037285 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.039822 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e26ae36-9e04-454a-b998-fd4a1c83b4d6","Type":"ContainerDied","Data":"6669bd5a7683703313b643ab748ef0ae70af690b8a93199863c72a240823d68d"} Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.039876 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e26ae36-9e04-454a-b998-fd4a1c83b4d6","Type":"ContainerDied","Data":"e32890cc19b738cce37babd3008ed8d50dd1166e3b4529733bdccf976db64b93"} Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.039892 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e26ae36-9e04-454a-b998-fd4a1c83b4d6","Type":"ContainerDied","Data":"b41bbec2bb21f1a6e790ebd8f8b0b3d76653a027b5c237be0a3f034f5af51ff4"} Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.039907 4811 scope.go:117] "RemoveContainer" containerID="6669bd5a7683703313b643ab748ef0ae70af690b8a93199863c72a240823d68d" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.077672 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.091650 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.093642 4811 scope.go:117] "RemoveContainer" containerID="e32890cc19b738cce37babd3008ed8d50dd1166e3b4529733bdccf976db64b93" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.106579 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:15:30 crc kubenswrapper[4811]: E0216 21:15:30.107119 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e26ae36-9e04-454a-b998-fd4a1c83b4d6" containerName="nova-metadata-log" Feb 16 21:15:30 crc 
kubenswrapper[4811]: I0216 21:15:30.107143 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e26ae36-9e04-454a-b998-fd4a1c83b4d6" containerName="nova-metadata-log" Feb 16 21:15:30 crc kubenswrapper[4811]: E0216 21:15:30.107165 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e26ae36-9e04-454a-b998-fd4a1c83b4d6" containerName="nova-metadata-metadata" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.107173 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e26ae36-9e04-454a-b998-fd4a1c83b4d6" containerName="nova-metadata-metadata" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.108336 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e26ae36-9e04-454a-b998-fd4a1c83b4d6" containerName="nova-metadata-metadata" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.108372 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e26ae36-9e04-454a-b998-fd4a1c83b4d6" containerName="nova-metadata-log" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.109841 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.113594 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.113871 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.132454 4811 scope.go:117] "RemoveContainer" containerID="6669bd5a7683703313b643ab748ef0ae70af690b8a93199863c72a240823d68d" Feb 16 21:15:30 crc kubenswrapper[4811]: E0216 21:15:30.133082 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6669bd5a7683703313b643ab748ef0ae70af690b8a93199863c72a240823d68d\": container with ID starting with 6669bd5a7683703313b643ab748ef0ae70af690b8a93199863c72a240823d68d not found: ID does not exist" containerID="6669bd5a7683703313b643ab748ef0ae70af690b8a93199863c72a240823d68d" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.133119 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6669bd5a7683703313b643ab748ef0ae70af690b8a93199863c72a240823d68d"} err="failed to get container status \"6669bd5a7683703313b643ab748ef0ae70af690b8a93199863c72a240823d68d\": rpc error: code = NotFound desc = could not find container \"6669bd5a7683703313b643ab748ef0ae70af690b8a93199863c72a240823d68d\": container with ID starting with 6669bd5a7683703313b643ab748ef0ae70af690b8a93199863c72a240823d68d not found: ID does not exist" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.133144 4811 scope.go:117] "RemoveContainer" containerID="e32890cc19b738cce37babd3008ed8d50dd1166e3b4529733bdccf976db64b93" Feb 16 21:15:30 crc kubenswrapper[4811]: E0216 21:15:30.136551 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"e32890cc19b738cce37babd3008ed8d50dd1166e3b4529733bdccf976db64b93\": container with ID starting with e32890cc19b738cce37babd3008ed8d50dd1166e3b4529733bdccf976db64b93 not found: ID does not exist" containerID="e32890cc19b738cce37babd3008ed8d50dd1166e3b4529733bdccf976db64b93" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.136619 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e32890cc19b738cce37babd3008ed8d50dd1166e3b4529733bdccf976db64b93"} err="failed to get container status \"e32890cc19b738cce37babd3008ed8d50dd1166e3b4529733bdccf976db64b93\": rpc error: code = NotFound desc = could not find container \"e32890cc19b738cce37babd3008ed8d50dd1166e3b4529733bdccf976db64b93\": container with ID starting with e32890cc19b738cce37babd3008ed8d50dd1166e3b4529733bdccf976db64b93 not found: ID does not exist" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.136675 4811 scope.go:117] "RemoveContainer" containerID="6669bd5a7683703313b643ab748ef0ae70af690b8a93199863c72a240823d68d" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.139080 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.139182 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6669bd5a7683703313b643ab748ef0ae70af690b8a93199863c72a240823d68d"} err="failed to get container status \"6669bd5a7683703313b643ab748ef0ae70af690b8a93199863c72a240823d68d\": rpc error: code = NotFound desc = could not find container \"6669bd5a7683703313b643ab748ef0ae70af690b8a93199863c72a240823d68d\": container with ID starting with 6669bd5a7683703313b643ab748ef0ae70af690b8a93199863c72a240823d68d not found: ID does not exist" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.139221 4811 scope.go:117] "RemoveContainer" containerID="e32890cc19b738cce37babd3008ed8d50dd1166e3b4529733bdccf976db64b93" Feb 16 21:15:30 
crc kubenswrapper[4811]: I0216 21:15:30.145020 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e32890cc19b738cce37babd3008ed8d50dd1166e3b4529733bdccf976db64b93"} err="failed to get container status \"e32890cc19b738cce37babd3008ed8d50dd1166e3b4529733bdccf976db64b93\": rpc error: code = NotFound desc = could not find container \"e32890cc19b738cce37babd3008ed8d50dd1166e3b4529733bdccf976db64b93\": container with ID starting with e32890cc19b738cce37babd3008ed8d50dd1166e3b4529733bdccf976db64b93 not found: ID does not exist" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.255401 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6r9v\" (UniqueName: \"kubernetes.io/projected/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-kube-api-access-n6r9v\") pod \"nova-metadata-0\" (UID: \"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc\") " pod="openstack/nova-metadata-0" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.255504 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc\") " pod="openstack/nova-metadata-0" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.255620 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc\") " pod="openstack/nova-metadata-0" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.255652 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-config-data\") pod \"nova-metadata-0\" (UID: \"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc\") " pod="openstack/nova-metadata-0" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.255693 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-logs\") pod \"nova-metadata-0\" (UID: \"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc\") " pod="openstack/nova-metadata-0" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.358189 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6r9v\" (UniqueName: \"kubernetes.io/projected/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-kube-api-access-n6r9v\") pod \"nova-metadata-0\" (UID: \"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc\") " pod="openstack/nova-metadata-0" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.358286 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc\") " pod="openstack/nova-metadata-0" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.358340 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc\") " pod="openstack/nova-metadata-0" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.358361 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-config-data\") pod \"nova-metadata-0\" (UID: \"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc\") " 
pod="openstack/nova-metadata-0" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.358406 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-logs\") pod \"nova-metadata-0\" (UID: \"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc\") " pod="openstack/nova-metadata-0" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.358831 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-logs\") pod \"nova-metadata-0\" (UID: \"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc\") " pod="openstack/nova-metadata-0" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.364479 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-config-data\") pod \"nova-metadata-0\" (UID: \"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc\") " pod="openstack/nova-metadata-0" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.364709 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc\") " pod="openstack/nova-metadata-0" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.365334 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc\") " pod="openstack/nova-metadata-0" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.380604 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6r9v\" (UniqueName: 
\"kubernetes.io/projected/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-kube-api-access-n6r9v\") pod \"nova-metadata-0\" (UID: \"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc\") " pod="openstack/nova-metadata-0" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.444699 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.716314 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e26ae36-9e04-454a-b998-fd4a1c83b4d6" path="/var/lib/kubelet/pods/1e26ae36-9e04-454a-b998-fd4a1c83b4d6/volumes" Feb 16 21:15:30 crc kubenswrapper[4811]: I0216 21:15:30.944373 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:15:31 crc kubenswrapper[4811]: I0216 21:15:31.046539 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc","Type":"ContainerStarted","Data":"2cf4ced22ff1008b3d5f330391934f18f1eae6e611509e21439a89c7a7f18686"} Feb 16 21:15:32 crc kubenswrapper[4811]: I0216 21:15:32.060098 4811 generic.go:334] "Generic (PLEG): container finished" podID="2be2a7d3-2469-41ce-a8d4-baf6e58aece5" containerID="6a42419938614270f85e664c5279f3464f9ac631067acf322dbecc09fd515997" exitCode=0 Feb 16 21:15:32 crc kubenswrapper[4811]: I0216 21:15:32.060244 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hcp7j" event={"ID":"2be2a7d3-2469-41ce-a8d4-baf6e58aece5","Type":"ContainerDied","Data":"6a42419938614270f85e664c5279f3464f9ac631067acf322dbecc09fd515997"} Feb 16 21:15:32 crc kubenswrapper[4811]: I0216 21:15:32.064799 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc","Type":"ContainerStarted","Data":"4d50936ea98f91c417de1751c22c26fe13b76607bf123451212c4209cccf2ee2"} Feb 16 21:15:32 crc kubenswrapper[4811]: I0216 
21:15:32.064869 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc","Type":"ContainerStarted","Data":"f63990b5d0b5b651c6859086e8624fbb4813a2f69a9193e883744f803417f38e"} Feb 16 21:15:32 crc kubenswrapper[4811]: I0216 21:15:32.129006 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.128953816 podStartE2EDuration="2.128953816s" podCreationTimestamp="2026-02-16 21:15:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:15:32.104799999 +0000 UTC m=+1150.034095947" watchObservedRunningTime="2026-02-16 21:15:32.128953816 +0000 UTC m=+1150.058249784" Feb 16 21:15:33 crc kubenswrapper[4811]: I0216 21:15:33.082933 4811 generic.go:334] "Generic (PLEG): container finished" podID="8793e091-1ee8-417a-86fa-0c22af64bde3" containerID="ba149d7a1af50bcfd429e917e6c06672d3954677d270782eb4fecae0377ca675" exitCode=0 Feb 16 21:15:33 crc kubenswrapper[4811]: I0216 21:15:33.083038 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-d8l76" event={"ID":"8793e091-1ee8-417a-86fa-0c22af64bde3","Type":"ContainerDied","Data":"ba149d7a1af50bcfd429e917e6c06672d3954677d270782eb4fecae0377ca675"} Feb 16 21:15:33 crc kubenswrapper[4811]: I0216 21:15:33.362858 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 21:15:33 crc kubenswrapper[4811]: I0216 21:15:33.363095 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 21:15:33 crc kubenswrapper[4811]: I0216 21:15:33.409902 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 21:15:33 crc kubenswrapper[4811]: I0216 21:15:33.599288 4811 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 21:15:33 crc kubenswrapper[4811]: I0216 21:15:33.599335 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 21:15:33 crc kubenswrapper[4811]: I0216 21:15:33.610613 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hcp7j" Feb 16 21:15:33 crc kubenswrapper[4811]: I0216 21:15:33.620624 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" Feb 16 21:15:33 crc kubenswrapper[4811]: I0216 21:15:33.684679 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:15:33 crc kubenswrapper[4811]: I0216 21:15:33.701633 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-wk5kr"] Feb 16 21:15:33 crc kubenswrapper[4811]: I0216 21:15:33.701864 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" podUID="b1f571c4-4cc9-417d-8fe3-84cf4dba83a8" containerName="dnsmasq-dns" containerID="cri-o://fc8bb9f355be0845136116f1c4060f71f870aeb595c37d9d537aa11a5d87a3f6" gracePeriod=10 Feb 16 21:15:33 crc kubenswrapper[4811]: I0216 21:15:33.759792 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2be2a7d3-2469-41ce-a8d4-baf6e58aece5-config-data\") pod \"2be2a7d3-2469-41ce-a8d4-baf6e58aece5\" (UID: \"2be2a7d3-2469-41ce-a8d4-baf6e58aece5\") " Feb 16 21:15:33 crc kubenswrapper[4811]: I0216 21:15:33.759847 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2sz6\" (UniqueName: \"kubernetes.io/projected/2be2a7d3-2469-41ce-a8d4-baf6e58aece5-kube-api-access-m2sz6\") pod \"2be2a7d3-2469-41ce-a8d4-baf6e58aece5\" (UID: 
\"2be2a7d3-2469-41ce-a8d4-baf6e58aece5\") " Feb 16 21:15:33 crc kubenswrapper[4811]: I0216 21:15:33.759947 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2be2a7d3-2469-41ce-a8d4-baf6e58aece5-combined-ca-bundle\") pod \"2be2a7d3-2469-41ce-a8d4-baf6e58aece5\" (UID: \"2be2a7d3-2469-41ce-a8d4-baf6e58aece5\") " Feb 16 21:15:33 crc kubenswrapper[4811]: I0216 21:15:33.760023 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2be2a7d3-2469-41ce-a8d4-baf6e58aece5-scripts\") pod \"2be2a7d3-2469-41ce-a8d4-baf6e58aece5\" (UID: \"2be2a7d3-2469-41ce-a8d4-baf6e58aece5\") " Feb 16 21:15:33 crc kubenswrapper[4811]: I0216 21:15:33.768004 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2be2a7d3-2469-41ce-a8d4-baf6e58aece5-kube-api-access-m2sz6" (OuterVolumeSpecName: "kube-api-access-m2sz6") pod "2be2a7d3-2469-41ce-a8d4-baf6e58aece5" (UID: "2be2a7d3-2469-41ce-a8d4-baf6e58aece5"). InnerVolumeSpecName "kube-api-access-m2sz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:33 crc kubenswrapper[4811]: I0216 21:15:33.772303 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2be2a7d3-2469-41ce-a8d4-baf6e58aece5-scripts" (OuterVolumeSpecName: "scripts") pod "2be2a7d3-2469-41ce-a8d4-baf6e58aece5" (UID: "2be2a7d3-2469-41ce-a8d4-baf6e58aece5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:33 crc kubenswrapper[4811]: I0216 21:15:33.791324 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2be2a7d3-2469-41ce-a8d4-baf6e58aece5-config-data" (OuterVolumeSpecName: "config-data") pod "2be2a7d3-2469-41ce-a8d4-baf6e58aece5" (UID: "2be2a7d3-2469-41ce-a8d4-baf6e58aece5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:33 crc kubenswrapper[4811]: I0216 21:15:33.806437 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2be2a7d3-2469-41ce-a8d4-baf6e58aece5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2be2a7d3-2469-41ce-a8d4-baf6e58aece5" (UID: "2be2a7d3-2469-41ce-a8d4-baf6e58aece5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:33 crc kubenswrapper[4811]: I0216 21:15:33.863250 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2be2a7d3-2469-41ce-a8d4-baf6e58aece5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:33 crc kubenswrapper[4811]: I0216 21:15:33.863281 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2be2a7d3-2469-41ce-a8d4-baf6e58aece5-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:33 crc kubenswrapper[4811]: I0216 21:15:33.863290 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2be2a7d3-2469-41ce-a8d4-baf6e58aece5-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:33 crc kubenswrapper[4811]: I0216 21:15:33.863299 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2sz6\" (UniqueName: \"kubernetes.io/projected/2be2a7d3-2469-41ce-a8d4-baf6e58aece5-kube-api-access-m2sz6\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.097178 4811 generic.go:334] "Generic (PLEG): container finished" podID="b1f571c4-4cc9-417d-8fe3-84cf4dba83a8" containerID="fc8bb9f355be0845136116f1c4060f71f870aeb595c37d9d537aa11a5d87a3f6" exitCode=0 Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.097221 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" 
event={"ID":"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8","Type":"ContainerDied","Data":"fc8bb9f355be0845136116f1c4060f71f870aeb595c37d9d537aa11a5d87a3f6"} Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.099851 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hcp7j" event={"ID":"2be2a7d3-2469-41ce-a8d4-baf6e58aece5","Type":"ContainerDied","Data":"99584bfe3e9dc1d8d900849d947275d33475a25d869fc76476172a1b9ca4d14f"} Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.099873 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99584bfe3e9dc1d8d900849d947275d33475a25d869fc76476172a1b9ca4d14f" Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.099912 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hcp7j" Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.186750 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 21:15:34 crc kubenswrapper[4811]: E0216 21:15:34.189169 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2be2a7d3-2469-41ce-a8d4-baf6e58aece5" containerName="nova-cell1-conductor-db-sync" Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.189265 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="2be2a7d3-2469-41ce-a8d4-baf6e58aece5" containerName="nova-cell1-conductor-db-sync" Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.189540 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="2be2a7d3-2469-41ce-a8d4-baf6e58aece5" containerName="nova-cell1-conductor-db-sync" Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.191215 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.191358 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.214226 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.217210 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.258561 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.274379 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6g5s\" (UniqueName: \"kubernetes.io/projected/f13872df-22e7-4ca1-8b4d-3235e5265f5e-kube-api-access-l6g5s\") pod \"nova-cell1-conductor-0\" (UID: \"f13872df-22e7-4ca1-8b4d-3235e5265f5e\") " pod="openstack/nova-cell1-conductor-0" Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.274507 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f13872df-22e7-4ca1-8b4d-3235e5265f5e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f13872df-22e7-4ca1-8b4d-3235e5265f5e\") " pod="openstack/nova-cell1-conductor-0" Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.274566 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f13872df-22e7-4ca1-8b4d-3235e5265f5e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f13872df-22e7-4ca1-8b4d-3235e5265f5e\") " pod="openstack/nova-cell1-conductor-0" Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.377873 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-dns-swift-storage-0\") pod \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\" (UID: \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\") " Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.378040 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-config\") pod \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\" (UID: \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\") " Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.378121 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-ovsdbserver-sb\") pod \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\" (UID: \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\") " Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.378236 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j855z\" (UniqueName: \"kubernetes.io/projected/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-kube-api-access-j855z\") pod \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\" (UID: \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\") " Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.378294 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-ovsdbserver-nb\") pod \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\" (UID: \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\") " Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.378312 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-dns-svc\") pod \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\" (UID: \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\") " Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.378548 
4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f13872df-22e7-4ca1-8b4d-3235e5265f5e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f13872df-22e7-4ca1-8b4d-3235e5265f5e\") " pod="openstack/nova-cell1-conductor-0" Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.378620 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6g5s\" (UniqueName: \"kubernetes.io/projected/f13872df-22e7-4ca1-8b4d-3235e5265f5e-kube-api-access-l6g5s\") pod \"nova-cell1-conductor-0\" (UID: \"f13872df-22e7-4ca1-8b4d-3235e5265f5e\") " pod="openstack/nova-cell1-conductor-0" Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.378716 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f13872df-22e7-4ca1-8b4d-3235e5265f5e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f13872df-22e7-4ca1-8b4d-3235e5265f5e\") " pod="openstack/nova-cell1-conductor-0" Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.398330 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f13872df-22e7-4ca1-8b4d-3235e5265f5e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f13872df-22e7-4ca1-8b4d-3235e5265f5e\") " pod="openstack/nova-cell1-conductor-0" Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.398758 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-kube-api-access-j855z" (OuterVolumeSpecName: "kube-api-access-j855z") pod "b1f571c4-4cc9-417d-8fe3-84cf4dba83a8" (UID: "b1f571c4-4cc9-417d-8fe3-84cf4dba83a8"). InnerVolumeSpecName "kube-api-access-j855z". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.401744 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f13872df-22e7-4ca1-8b4d-3235e5265f5e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f13872df-22e7-4ca1-8b4d-3235e5265f5e\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.409772 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6g5s\" (UniqueName: \"kubernetes.io/projected/f13872df-22e7-4ca1-8b4d-3235e5265f5e-kube-api-access-l6g5s\") pod \"nova-cell1-conductor-0\" (UID: \"f13872df-22e7-4ca1-8b4d-3235e5265f5e\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.441798 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b1f571c4-4cc9-417d-8fe3-84cf4dba83a8" (UID: "b1f571c4-4cc9-417d-8fe3-84cf4dba83a8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.472636 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-config" (OuterVolumeSpecName: "config") pod "b1f571c4-4cc9-417d-8fe3-84cf4dba83a8" (UID: "b1f571c4-4cc9-417d-8fe3-84cf4dba83a8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.476283 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b1f571c4-4cc9-417d-8fe3-84cf4dba83a8" (UID: "b1f571c4-4cc9-417d-8fe3-84cf4dba83a8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.480554 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b1f571c4-4cc9-417d-8fe3-84cf4dba83a8" (UID: "b1f571c4-4cc9-417d-8fe3-84cf4dba83a8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.480881 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-dns-swift-storage-0\") pod \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\" (UID: \"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8\") "
Feb 16 21:15:34 crc kubenswrapper[4811]: W0216 21:15:34.481002 4811 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8/volumes/kubernetes.io~configmap/dns-swift-storage-0
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.481012 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b1f571c4-4cc9-417d-8fe3-84cf4dba83a8" (UID: "b1f571c4-4cc9-417d-8fe3-84cf4dba83a8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.484272 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-config\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.484294 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.484303 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j855z\" (UniqueName: \"kubernetes.io/projected/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-kube-api-access-j855z\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.484313 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.484322 4811 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.502025 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b1f571c4-4cc9-417d-8fe3-84cf4dba83a8" (UID: "b1f571c4-4cc9-417d-8fe3-84cf4dba83a8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.557143 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.590474 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.656741 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-d8l76"
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.683062 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e259daff-e55a-47d6-b55d-0c63fa1fe468" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.206:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.683306 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e259daff-e55a-47d6-b55d-0c63fa1fe468" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.206:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 16 21:15:34 crc kubenswrapper[4811]: E0216 21:15:34.709094 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce"
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.793035 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8793e091-1ee8-417a-86fa-0c22af64bde3-scripts\") pod \"8793e091-1ee8-417a-86fa-0c22af64bde3\" (UID: \"8793e091-1ee8-417a-86fa-0c22af64bde3\") "
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.793151 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twjhp\" (UniqueName: \"kubernetes.io/projected/8793e091-1ee8-417a-86fa-0c22af64bde3-kube-api-access-twjhp\") pod \"8793e091-1ee8-417a-86fa-0c22af64bde3\" (UID: \"8793e091-1ee8-417a-86fa-0c22af64bde3\") "
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.793299 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8793e091-1ee8-417a-86fa-0c22af64bde3-config-data\") pod \"8793e091-1ee8-417a-86fa-0c22af64bde3\" (UID: \"8793e091-1ee8-417a-86fa-0c22af64bde3\") "
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.793343 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8793e091-1ee8-417a-86fa-0c22af64bde3-combined-ca-bundle\") pod \"8793e091-1ee8-417a-86fa-0c22af64bde3\" (UID: \"8793e091-1ee8-417a-86fa-0c22af64bde3\") "
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.797188 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8793e091-1ee8-417a-86fa-0c22af64bde3-scripts" (OuterVolumeSpecName: "scripts") pod "8793e091-1ee8-417a-86fa-0c22af64bde3" (UID: "8793e091-1ee8-417a-86fa-0c22af64bde3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.797796 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8793e091-1ee8-417a-86fa-0c22af64bde3-kube-api-access-twjhp" (OuterVolumeSpecName: "kube-api-access-twjhp") pod "8793e091-1ee8-417a-86fa-0c22af64bde3" (UID: "8793e091-1ee8-417a-86fa-0c22af64bde3"). InnerVolumeSpecName "kube-api-access-twjhp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.828463 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8793e091-1ee8-417a-86fa-0c22af64bde3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8793e091-1ee8-417a-86fa-0c22af64bde3" (UID: "8793e091-1ee8-417a-86fa-0c22af64bde3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.831008 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8793e091-1ee8-417a-86fa-0c22af64bde3-config-data" (OuterVolumeSpecName: "config-data") pod "8793e091-1ee8-417a-86fa-0c22af64bde3" (UID: "8793e091-1ee8-417a-86fa-0c22af64bde3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.895978 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twjhp\" (UniqueName: \"kubernetes.io/projected/8793e091-1ee8-417a-86fa-0c22af64bde3-kube-api-access-twjhp\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.896018 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8793e091-1ee8-417a-86fa-0c22af64bde3-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.896033 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8793e091-1ee8-417a-86fa-0c22af64bde3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:34 crc kubenswrapper[4811]: I0216 21:15:34.896047 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8793e091-1ee8-417a-86fa-0c22af64bde3-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:35 crc kubenswrapper[4811]: I0216 21:15:35.004569 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 16 21:15:35 crc kubenswrapper[4811]: I0216 21:15:35.112941 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-d8l76"
Feb 16 21:15:35 crc kubenswrapper[4811]: I0216 21:15:35.112950 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-d8l76" event={"ID":"8793e091-1ee8-417a-86fa-0c22af64bde3","Type":"ContainerDied","Data":"336af0063a2a5f64c6209dac0688d01bd40c2956e48d08f649b3bf66896b176e"}
Feb 16 21:15:35 crc kubenswrapper[4811]: I0216 21:15:35.113519 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="336af0063a2a5f64c6209dac0688d01bd40c2956e48d08f649b3bf66896b176e"
Feb 16 21:15:35 crc kubenswrapper[4811]: I0216 21:15:35.114958 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-wk5kr" event={"ID":"b1f571c4-4cc9-417d-8fe3-84cf4dba83a8","Type":"ContainerDied","Data":"217a298d933f5f9099f5d21bc93d21766101c1e15a7e43d74bae3897ae0633d7"}
Feb 16 21:15:35 crc kubenswrapper[4811]: I0216 21:15:35.114994 4811 scope.go:117] "RemoveContainer" containerID="fc8bb9f355be0845136116f1c4060f71f870aeb595c37d9d537aa11a5d87a3f6"
Feb 16 21:15:35 crc kubenswrapper[4811]: I0216 21:15:35.115044 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-wk5kr"
Feb 16 21:15:35 crc kubenswrapper[4811]: I0216 21:15:35.118384 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f13872df-22e7-4ca1-8b4d-3235e5265f5e","Type":"ContainerStarted","Data":"d04a7e1f2a7e33a2a363a3e64905e5eaf19a20b0e534c3ba0a94f44fd8b6cb6d"}
Feb 16 21:15:35 crc kubenswrapper[4811]: I0216 21:15:35.169008 4811 scope.go:117] "RemoveContainer" containerID="c5765e2b52670de282c9d5c6431131396d6f90c946033e463ec9d39a7fbb25eb"
Feb 16 21:15:35 crc kubenswrapper[4811]: I0216 21:15:35.174864 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-wk5kr"]
Feb 16 21:15:35 crc kubenswrapper[4811]: I0216 21:15:35.185725 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-wk5kr"]
Feb 16 21:15:35 crc kubenswrapper[4811]: I0216 21:15:35.384060 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 16 21:15:35 crc kubenswrapper[4811]: I0216 21:15:35.384397 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e259daff-e55a-47d6-b55d-0c63fa1fe468" containerName="nova-api-log" containerID="cri-o://241ace9234c895fbde5de6da0dd2ee18f6b33a503ebb72262ea0bc0d1f76a9c8" gracePeriod=30
Feb 16 21:15:35 crc kubenswrapper[4811]: I0216 21:15:35.384599 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e259daff-e55a-47d6-b55d-0c63fa1fe468" containerName="nova-api-api" containerID="cri-o://53b5d3f8756c1c220a34c53d89a9cd4b0f114769e6edaeb7cc6afdc07c6e63f7" gracePeriod=30
Feb 16 21:15:35 crc kubenswrapper[4811]: I0216 21:15:35.396776 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 21:15:35 crc kubenswrapper[4811]: I0216 21:15:35.443960 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 21:15:35 crc kubenswrapper[4811]: I0216 21:15:35.444178 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc" containerName="nova-metadata-log" containerID="cri-o://f63990b5d0b5b651c6859086e8624fbb4813a2f69a9193e883744f803417f38e" gracePeriod=30
Feb 16 21:15:35 crc kubenswrapper[4811]: I0216 21:15:35.444575 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc" containerName="nova-metadata-metadata" containerID="cri-o://4d50936ea98f91c417de1751c22c26fe13b76607bf123451212c4209cccf2ee2" gracePeriod=30
Feb 16 21:15:35 crc kubenswrapper[4811]: I0216 21:15:35.444746 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 21:15:35 crc kubenswrapper[4811]: I0216 21:15:35.444798 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.031452 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.121204 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-combined-ca-bundle\") pod \"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc\" (UID: \"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc\") "
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.121265 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-logs\") pod \"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc\" (UID: \"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc\") "
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.121411 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-nova-metadata-tls-certs\") pod \"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc\" (UID: \"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc\") "
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.121881 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6r9v\" (UniqueName: \"kubernetes.io/projected/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-kube-api-access-n6r9v\") pod \"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc\" (UID: \"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc\") "
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.121902 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-logs" (OuterVolumeSpecName: "logs") pod "ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc" (UID: "ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.122045 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-config-data\") pod \"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc\" (UID: \"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc\") "
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.123028 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-logs\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.129285 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-kube-api-access-n6r9v" (OuterVolumeSpecName: "kube-api-access-n6r9v") pod "ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc" (UID: "ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc"). InnerVolumeSpecName "kube-api-access-n6r9v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.140650 4811 generic.go:334] "Generic (PLEG): container finished" podID="e259daff-e55a-47d6-b55d-0c63fa1fe468" containerID="241ace9234c895fbde5de6da0dd2ee18f6b33a503ebb72262ea0bc0d1f76a9c8" exitCode=143
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.140742 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e259daff-e55a-47d6-b55d-0c63fa1fe468","Type":"ContainerDied","Data":"241ace9234c895fbde5de6da0dd2ee18f6b33a503ebb72262ea0bc0d1f76a9c8"}
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.144232 4811 generic.go:334] "Generic (PLEG): container finished" podID="ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc" containerID="4d50936ea98f91c417de1751c22c26fe13b76607bf123451212c4209cccf2ee2" exitCode=0
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.144285 4811 generic.go:334] "Generic (PLEG): container finished" podID="ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc" containerID="f63990b5d0b5b651c6859086e8624fbb4813a2f69a9193e883744f803417f38e" exitCode=143
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.144357 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc","Type":"ContainerDied","Data":"4d50936ea98f91c417de1751c22c26fe13b76607bf123451212c4209cccf2ee2"}
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.144386 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc","Type":"ContainerDied","Data":"f63990b5d0b5b651c6859086e8624fbb4813a2f69a9193e883744f803417f38e"}
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.144400 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc","Type":"ContainerDied","Data":"2cf4ced22ff1008b3d5f330391934f18f1eae6e611509e21439a89c7a7f18686"}
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.144444 4811 scope.go:117] "RemoveContainer" containerID="4d50936ea98f91c417de1751c22c26fe13b76607bf123451212c4209cccf2ee2"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.144611 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.148702 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f13872df-22e7-4ca1-8b4d-3235e5265f5e","Type":"ContainerStarted","Data":"9e322cc70d185d5874dfbb3acb57247e7b9ac9e24916af5c7189d2fad10ca095"}
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.148773 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.165090 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc" (UID: "ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.166322 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-config-data" (OuterVolumeSpecName: "config-data") pod "ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc" (UID: "ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.177794 4811 scope.go:117] "RemoveContainer" containerID="f63990b5d0b5b651c6859086e8624fbb4813a2f69a9193e883744f803417f38e"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.192175 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.192158938 podStartE2EDuration="2.192158938s" podCreationTimestamp="2026-02-16 21:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:15:36.171829459 +0000 UTC m=+1154.101125417" watchObservedRunningTime="2026-02-16 21:15:36.192158938 +0000 UTC m=+1154.121454876"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.206136 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc" (UID: "ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.206184 4811 scope.go:117] "RemoveContainer" containerID="4d50936ea98f91c417de1751c22c26fe13b76607bf123451212c4209cccf2ee2"
Feb 16 21:15:36 crc kubenswrapper[4811]: E0216 21:15:36.207479 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d50936ea98f91c417de1751c22c26fe13b76607bf123451212c4209cccf2ee2\": container with ID starting with 4d50936ea98f91c417de1751c22c26fe13b76607bf123451212c4209cccf2ee2 not found: ID does not exist" containerID="4d50936ea98f91c417de1751c22c26fe13b76607bf123451212c4209cccf2ee2"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.207534 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d50936ea98f91c417de1751c22c26fe13b76607bf123451212c4209cccf2ee2"} err="failed to get container status \"4d50936ea98f91c417de1751c22c26fe13b76607bf123451212c4209cccf2ee2\": rpc error: code = NotFound desc = could not find container \"4d50936ea98f91c417de1751c22c26fe13b76607bf123451212c4209cccf2ee2\": container with ID starting with 4d50936ea98f91c417de1751c22c26fe13b76607bf123451212c4209cccf2ee2 not found: ID does not exist"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.207555 4811 scope.go:117] "RemoveContainer" containerID="f63990b5d0b5b651c6859086e8624fbb4813a2f69a9193e883744f803417f38e"
Feb 16 21:15:36 crc kubenswrapper[4811]: E0216 21:15:36.214049 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f63990b5d0b5b651c6859086e8624fbb4813a2f69a9193e883744f803417f38e\": container with ID starting with f63990b5d0b5b651c6859086e8624fbb4813a2f69a9193e883744f803417f38e not found: ID does not exist" containerID="f63990b5d0b5b651c6859086e8624fbb4813a2f69a9193e883744f803417f38e"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.214111 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f63990b5d0b5b651c6859086e8624fbb4813a2f69a9193e883744f803417f38e"} err="failed to get container status \"f63990b5d0b5b651c6859086e8624fbb4813a2f69a9193e883744f803417f38e\": rpc error: code = NotFound desc = could not find container \"f63990b5d0b5b651c6859086e8624fbb4813a2f69a9193e883744f803417f38e\": container with ID starting with f63990b5d0b5b651c6859086e8624fbb4813a2f69a9193e883744f803417f38e not found: ID does not exist"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.214143 4811 scope.go:117] "RemoveContainer" containerID="4d50936ea98f91c417de1751c22c26fe13b76607bf123451212c4209cccf2ee2"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.214486 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d50936ea98f91c417de1751c22c26fe13b76607bf123451212c4209cccf2ee2"} err="failed to get container status \"4d50936ea98f91c417de1751c22c26fe13b76607bf123451212c4209cccf2ee2\": rpc error: code = NotFound desc = could not find container \"4d50936ea98f91c417de1751c22c26fe13b76607bf123451212c4209cccf2ee2\": container with ID starting with 4d50936ea98f91c417de1751c22c26fe13b76607bf123451212c4209cccf2ee2 not found: ID does not exist"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.214521 4811 scope.go:117] "RemoveContainer" containerID="f63990b5d0b5b651c6859086e8624fbb4813a2f69a9193e883744f803417f38e"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.214845 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f63990b5d0b5b651c6859086e8624fbb4813a2f69a9193e883744f803417f38e"} err="failed to get container status \"f63990b5d0b5b651c6859086e8624fbb4813a2f69a9193e883744f803417f38e\": rpc error: code = NotFound desc = could not find container \"f63990b5d0b5b651c6859086e8624fbb4813a2f69a9193e883744f803417f38e\": container with ID starting with f63990b5d0b5b651c6859086e8624fbb4813a2f69a9193e883744f803417f38e not found: ID does not exist"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.225004 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.225031 4811 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.225041 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6r9v\" (UniqueName: \"kubernetes.io/projected/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-kube-api-access-n6r9v\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.225053 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.491541 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.498324 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.512151 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 21:15:36 crc kubenswrapper[4811]: E0216 21:15:36.512601 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc" containerName="nova-metadata-metadata"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.512618 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc" containerName="nova-metadata-metadata"
Feb 16 21:15:36 crc kubenswrapper[4811]: E0216 21:15:36.512629 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8793e091-1ee8-417a-86fa-0c22af64bde3" containerName="nova-manage"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.512635 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8793e091-1ee8-417a-86fa-0c22af64bde3" containerName="nova-manage"
Feb 16 21:15:36 crc kubenswrapper[4811]: E0216 21:15:36.512667 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc" containerName="nova-metadata-log"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.512673 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc" containerName="nova-metadata-log"
Feb 16 21:15:36 crc kubenswrapper[4811]: E0216 21:15:36.512688 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1f571c4-4cc9-417d-8fe3-84cf4dba83a8" containerName="init"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.512696 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1f571c4-4cc9-417d-8fe3-84cf4dba83a8" containerName="init"
Feb 16 21:15:36 crc kubenswrapper[4811]: E0216 21:15:36.512719 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1f571c4-4cc9-417d-8fe3-84cf4dba83a8" containerName="dnsmasq-dns"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.512727 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1f571c4-4cc9-417d-8fe3-84cf4dba83a8" containerName="dnsmasq-dns"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.512983 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="8793e091-1ee8-417a-86fa-0c22af64bde3" containerName="nova-manage"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.513027 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc" containerName="nova-metadata-log"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.513043 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc" containerName="nova-metadata-metadata"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.513066 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1f571c4-4cc9-417d-8fe3-84cf4dba83a8" containerName="dnsmasq-dns"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.514315 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.517234 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.517425 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.536957 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.632557 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d299bddd-235d-4382-8590-1103bf10fbd7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d299bddd-235d-4382-8590-1103bf10fbd7\") " pod="openstack/nova-metadata-0"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.632641 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d299bddd-235d-4382-8590-1103bf10fbd7-logs\") pod \"nova-metadata-0\" (UID: \"d299bddd-235d-4382-8590-1103bf10fbd7\") " pod="openstack/nova-metadata-0"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.632693 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d299bddd-235d-4382-8590-1103bf10fbd7-config-data\") pod \"nova-metadata-0\" (UID: \"d299bddd-235d-4382-8590-1103bf10fbd7\") " pod="openstack/nova-metadata-0"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.632708 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d299bddd-235d-4382-8590-1103bf10fbd7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d299bddd-235d-4382-8590-1103bf10fbd7\") " pod="openstack/nova-metadata-0"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.632801 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b6rr\" (UniqueName: \"kubernetes.io/projected/d299bddd-235d-4382-8590-1103bf10fbd7-kube-api-access-2b6rr\") pod \"nova-metadata-0\" (UID: \"d299bddd-235d-4382-8590-1103bf10fbd7\") " pod="openstack/nova-metadata-0"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.734371 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d299bddd-235d-4382-8590-1103bf10fbd7-logs\") pod \"nova-metadata-0\" (UID: \"d299bddd-235d-4382-8590-1103bf10fbd7\") " pod="openstack/nova-metadata-0"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.734454 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d299bddd-235d-4382-8590-1103bf10fbd7-config-data\") pod \"nova-metadata-0\" (UID: \"d299bddd-235d-4382-8590-1103bf10fbd7\") " pod="openstack/nova-metadata-0"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.734474 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d299bddd-235d-4382-8590-1103bf10fbd7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d299bddd-235d-4382-8590-1103bf10fbd7\") " pod="openstack/nova-metadata-0"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.734555 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b6rr\" (UniqueName: \"kubernetes.io/projected/d299bddd-235d-4382-8590-1103bf10fbd7-kube-api-access-2b6rr\") pod \"nova-metadata-0\" (UID: \"d299bddd-235d-4382-8590-1103bf10fbd7\") " pod="openstack/nova-metadata-0"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.734634 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d299bddd-235d-4382-8590-1103bf10fbd7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d299bddd-235d-4382-8590-1103bf10fbd7\") " pod="openstack/nova-metadata-0"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.735637 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d299bddd-235d-4382-8590-1103bf10fbd7-logs\") pod \"nova-metadata-0\" (UID: \"d299bddd-235d-4382-8590-1103bf10fbd7\") " pod="openstack/nova-metadata-0"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.737639 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc" path="/var/lib/kubelet/pods/ac840e81-90c6-4b2f-a8dd-c8ced6cb6ffc/volumes"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.738350 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1f571c4-4cc9-417d-8fe3-84cf4dba83a8" path="/var/lib/kubelet/pods/b1f571c4-4cc9-417d-8fe3-84cf4dba83a8/volumes"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.739186 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d299bddd-235d-4382-8590-1103bf10fbd7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d299bddd-235d-4382-8590-1103bf10fbd7\") " pod="openstack/nova-metadata-0"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.740165 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d299bddd-235d-4382-8590-1103bf10fbd7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d299bddd-235d-4382-8590-1103bf10fbd7\") " pod="openstack/nova-metadata-0"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.743846 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d299bddd-235d-4382-8590-1103bf10fbd7-config-data\") pod \"nova-metadata-0\" (UID: \"d299bddd-235d-4382-8590-1103bf10fbd7\") " pod="openstack/nova-metadata-0"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.757795 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b6rr\" (UniqueName: \"kubernetes.io/projected/d299bddd-235d-4382-8590-1103bf10fbd7-kube-api-access-2b6rr\") pod \"nova-metadata-0\" (UID: \"d299bddd-235d-4382-8590-1103bf10fbd7\") " pod="openstack/nova-metadata-0"
Feb 16 21:15:36 crc kubenswrapper[4811]: I0216 21:15:36.860507 4811 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:15:37 crc kubenswrapper[4811]: I0216 21:15:37.171528 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b320aa81-ef47-476e-8bfe-156fc797f12c" containerName="nova-scheduler-scheduler" containerID="cri-o://46cb8bab1993b7cf67e1280178d88ab8f33b88ed436aa4748268052894df0b21" gracePeriod=30 Feb 16 21:15:37 crc kubenswrapper[4811]: I0216 21:15:37.395930 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:15:38 crc kubenswrapper[4811]: I0216 21:15:38.189402 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d299bddd-235d-4382-8590-1103bf10fbd7","Type":"ContainerStarted","Data":"e11328c8686a6e7f6ee5170af8013a05d03a5bbd7491b491c7701ad4be77518c"} Feb 16 21:15:38 crc kubenswrapper[4811]: I0216 21:15:38.189730 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d299bddd-235d-4382-8590-1103bf10fbd7","Type":"ContainerStarted","Data":"c42f9e299a6305717fcd2a2bbe9358715eb24c0972aea96f7b14542639bb3ed3"} Feb 16 21:15:38 crc kubenswrapper[4811]: I0216 21:15:38.189744 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d299bddd-235d-4382-8590-1103bf10fbd7","Type":"ContainerStarted","Data":"17d73598829d8e5668c3af585a0fcb31850822983bcd0309fb3553b36ecf3b8b"} Feb 16 21:15:38 crc kubenswrapper[4811]: I0216 21:15:38.231482 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.231461875 podStartE2EDuration="2.231461875s" podCreationTimestamp="2026-02-16 21:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:15:38.218383081 +0000 UTC m=+1156.147679019" watchObservedRunningTime="2026-02-16 
21:15:38.231461875 +0000 UTC m=+1156.160757813" Feb 16 21:15:38 crc kubenswrapper[4811]: E0216 21:15:38.364925 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="46cb8bab1993b7cf67e1280178d88ab8f33b88ed436aa4748268052894df0b21" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 21:15:38 crc kubenswrapper[4811]: E0216 21:15:38.366559 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="46cb8bab1993b7cf67e1280178d88ab8f33b88ed436aa4748268052894df0b21" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 21:15:38 crc kubenswrapper[4811]: E0216 21:15:38.367896 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="46cb8bab1993b7cf67e1280178d88ab8f33b88ed436aa4748268052894df0b21" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 21:15:38 crc kubenswrapper[4811]: E0216 21:15:38.367938 4811 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="b320aa81-ef47-476e-8bfe-156fc797f12c" containerName="nova-scheduler-scheduler" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.179705 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.236354 4811 generic.go:334] "Generic (PLEG): container finished" podID="e259daff-e55a-47d6-b55d-0c63fa1fe468" containerID="53b5d3f8756c1c220a34c53d89a9cd4b0f114769e6edaeb7cc6afdc07c6e63f7" exitCode=0 Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.236399 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e259daff-e55a-47d6-b55d-0c63fa1fe468","Type":"ContainerDied","Data":"53b5d3f8756c1c220a34c53d89a9cd4b0f114769e6edaeb7cc6afdc07c6e63f7"} Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.240080 4811 generic.go:334] "Generic (PLEG): container finished" podID="b320aa81-ef47-476e-8bfe-156fc797f12c" containerID="46cb8bab1993b7cf67e1280178d88ab8f33b88ed436aa4748268052894df0b21" exitCode=0 Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.240112 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b320aa81-ef47-476e-8bfe-156fc797f12c","Type":"ContainerDied","Data":"46cb8bab1993b7cf67e1280178d88ab8f33b88ed436aa4748268052894df0b21"} Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.240138 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b320aa81-ef47-476e-8bfe-156fc797f12c","Type":"ContainerDied","Data":"62e779f0896c157914c9d072d323b5972e8884b409020a52913487bd5fb587ce"} Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.240159 4811 scope.go:117] "RemoveContainer" containerID="46cb8bab1993b7cf67e1280178d88ab8f33b88ed436aa4748268052894df0b21" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.240211 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.271438 4811 scope.go:117] "RemoveContainer" containerID="46cb8bab1993b7cf67e1280178d88ab8f33b88ed436aa4748268052894df0b21" Feb 16 21:15:40 crc kubenswrapper[4811]: E0216 21:15:40.272140 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46cb8bab1993b7cf67e1280178d88ab8f33b88ed436aa4748268052894df0b21\": container with ID starting with 46cb8bab1993b7cf67e1280178d88ab8f33b88ed436aa4748268052894df0b21 not found: ID does not exist" containerID="46cb8bab1993b7cf67e1280178d88ab8f33b88ed436aa4748268052894df0b21" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.272167 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46cb8bab1993b7cf67e1280178d88ab8f33b88ed436aa4748268052894df0b21"} err="failed to get container status \"46cb8bab1993b7cf67e1280178d88ab8f33b88ed436aa4748268052894df0b21\": rpc error: code = NotFound desc = could not find container \"46cb8bab1993b7cf67e1280178d88ab8f33b88ed436aa4748268052894df0b21\": container with ID starting with 46cb8bab1993b7cf67e1280178d88ab8f33b88ed436aa4748268052894df0b21 not found: ID does not exist" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.358313 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmlgd\" (UniqueName: \"kubernetes.io/projected/b320aa81-ef47-476e-8bfe-156fc797f12c-kube-api-access-xmlgd\") pod \"b320aa81-ef47-476e-8bfe-156fc797f12c\" (UID: \"b320aa81-ef47-476e-8bfe-156fc797f12c\") " Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.358440 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b320aa81-ef47-476e-8bfe-156fc797f12c-combined-ca-bundle\") pod \"b320aa81-ef47-476e-8bfe-156fc797f12c\" (UID: 
\"b320aa81-ef47-476e-8bfe-156fc797f12c\") " Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.362327 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b320aa81-ef47-476e-8bfe-156fc797f12c-config-data\") pod \"b320aa81-ef47-476e-8bfe-156fc797f12c\" (UID: \"b320aa81-ef47-476e-8bfe-156fc797f12c\") " Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.368007 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b320aa81-ef47-476e-8bfe-156fc797f12c-kube-api-access-xmlgd" (OuterVolumeSpecName: "kube-api-access-xmlgd") pod "b320aa81-ef47-476e-8bfe-156fc797f12c" (UID: "b320aa81-ef47-476e-8bfe-156fc797f12c"). InnerVolumeSpecName "kube-api-access-xmlgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.385185 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.390166 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b320aa81-ef47-476e-8bfe-156fc797f12c-config-data" (OuterVolumeSpecName: "config-data") pod "b320aa81-ef47-476e-8bfe-156fc797f12c" (UID: "b320aa81-ef47-476e-8bfe-156fc797f12c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.396374 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b320aa81-ef47-476e-8bfe-156fc797f12c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b320aa81-ef47-476e-8bfe-156fc797f12c" (UID: "b320aa81-ef47-476e-8bfe-156fc797f12c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.465551 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b320aa81-ef47-476e-8bfe-156fc797f12c-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.465597 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmlgd\" (UniqueName: \"kubernetes.io/projected/b320aa81-ef47-476e-8bfe-156fc797f12c-kube-api-access-xmlgd\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.465616 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b320aa81-ef47-476e-8bfe-156fc797f12c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.569507 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e259daff-e55a-47d6-b55d-0c63fa1fe468-config-data\") pod \"e259daff-e55a-47d6-b55d-0c63fa1fe468\" (UID: \"e259daff-e55a-47d6-b55d-0c63fa1fe468\") " Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.569981 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e259daff-e55a-47d6-b55d-0c63fa1fe468-combined-ca-bundle\") pod \"e259daff-e55a-47d6-b55d-0c63fa1fe468\" (UID: \"e259daff-e55a-47d6-b55d-0c63fa1fe468\") " Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.570212 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e259daff-e55a-47d6-b55d-0c63fa1fe468-logs\") pod \"e259daff-e55a-47d6-b55d-0c63fa1fe468\" (UID: \"e259daff-e55a-47d6-b55d-0c63fa1fe468\") " Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.570255 4811 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-gmnbt\" (UniqueName: \"kubernetes.io/projected/e259daff-e55a-47d6-b55d-0c63fa1fe468-kube-api-access-gmnbt\") pod \"e259daff-e55a-47d6-b55d-0c63fa1fe468\" (UID: \"e259daff-e55a-47d6-b55d-0c63fa1fe468\") " Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.571100 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e259daff-e55a-47d6-b55d-0c63fa1fe468-logs" (OuterVolumeSpecName: "logs") pod "e259daff-e55a-47d6-b55d-0c63fa1fe468" (UID: "e259daff-e55a-47d6-b55d-0c63fa1fe468"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.576300 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.577837 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e259daff-e55a-47d6-b55d-0c63fa1fe468-kube-api-access-gmnbt" (OuterVolumeSpecName: "kube-api-access-gmnbt") pod "e259daff-e55a-47d6-b55d-0c63fa1fe468" (UID: "e259daff-e55a-47d6-b55d-0c63fa1fe468"). InnerVolumeSpecName "kube-api-access-gmnbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.620795 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e259daff-e55a-47d6-b55d-0c63fa1fe468-config-data" (OuterVolumeSpecName: "config-data") pod "e259daff-e55a-47d6-b55d-0c63fa1fe468" (UID: "e259daff-e55a-47d6-b55d-0c63fa1fe468"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.645420 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.647501 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e259daff-e55a-47d6-b55d-0c63fa1fe468-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e259daff-e55a-47d6-b55d-0c63fa1fe468" (UID: "e259daff-e55a-47d6-b55d-0c63fa1fe468"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.659523 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:15:40 crc kubenswrapper[4811]: E0216 21:15:40.660328 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e259daff-e55a-47d6-b55d-0c63fa1fe468" containerName="nova-api-api" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.660419 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e259daff-e55a-47d6-b55d-0c63fa1fe468" containerName="nova-api-api" Feb 16 21:15:40 crc kubenswrapper[4811]: E0216 21:15:40.660459 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e259daff-e55a-47d6-b55d-0c63fa1fe468" containerName="nova-api-log" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.660473 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e259daff-e55a-47d6-b55d-0c63fa1fe468" containerName="nova-api-log" Feb 16 21:15:40 crc kubenswrapper[4811]: E0216 21:15:40.660502 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b320aa81-ef47-476e-8bfe-156fc797f12c" containerName="nova-scheduler-scheduler" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.660516 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="b320aa81-ef47-476e-8bfe-156fc797f12c" containerName="nova-scheduler-scheduler" 
Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.660848 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="b320aa81-ef47-476e-8bfe-156fc797f12c" containerName="nova-scheduler-scheduler" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.660882 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e259daff-e55a-47d6-b55d-0c63fa1fe468" containerName="nova-api-api" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.660912 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e259daff-e55a-47d6-b55d-0c63fa1fe468" containerName="nova-api-log" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.662531 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.665185 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.669688 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.673302 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6lx8\" (UniqueName: \"kubernetes.io/projected/9fa14015-5aeb-49dd-85d6-772ab019e88f-kube-api-access-r6lx8\") pod \"nova-scheduler-0\" (UID: \"9fa14015-5aeb-49dd-85d6-772ab019e88f\") " pod="openstack/nova-scheduler-0" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.673559 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa14015-5aeb-49dd-85d6-772ab019e88f-config-data\") pod \"nova-scheduler-0\" (UID: \"9fa14015-5aeb-49dd-85d6-772ab019e88f\") " pod="openstack/nova-scheduler-0" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.673784 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa14015-5aeb-49dd-85d6-772ab019e88f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9fa14015-5aeb-49dd-85d6-772ab019e88f\") " pod="openstack/nova-scheduler-0" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.674047 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e259daff-e55a-47d6-b55d-0c63fa1fe468-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.674106 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e259daff-e55a-47d6-b55d-0c63fa1fe468-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.674119 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmnbt\" (UniqueName: \"kubernetes.io/projected/e259daff-e55a-47d6-b55d-0c63fa1fe468-kube-api-access-gmnbt\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.674132 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e259daff-e55a-47d6-b55d-0c63fa1fe468-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.715808 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b320aa81-ef47-476e-8bfe-156fc797f12c" path="/var/lib/kubelet/pods/b320aa81-ef47-476e-8bfe-156fc797f12c/volumes" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.775004 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6lx8\" (UniqueName: \"kubernetes.io/projected/9fa14015-5aeb-49dd-85d6-772ab019e88f-kube-api-access-r6lx8\") pod \"nova-scheduler-0\" (UID: \"9fa14015-5aeb-49dd-85d6-772ab019e88f\") " pod="openstack/nova-scheduler-0" Feb 16 
21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.775085 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa14015-5aeb-49dd-85d6-772ab019e88f-config-data\") pod \"nova-scheduler-0\" (UID: \"9fa14015-5aeb-49dd-85d6-772ab019e88f\") " pod="openstack/nova-scheduler-0" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.775165 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa14015-5aeb-49dd-85d6-772ab019e88f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9fa14015-5aeb-49dd-85d6-772ab019e88f\") " pod="openstack/nova-scheduler-0" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.778978 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa14015-5aeb-49dd-85d6-772ab019e88f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9fa14015-5aeb-49dd-85d6-772ab019e88f\") " pod="openstack/nova-scheduler-0" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.789603 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa14015-5aeb-49dd-85d6-772ab019e88f-config-data\") pod \"nova-scheduler-0\" (UID: \"9fa14015-5aeb-49dd-85d6-772ab019e88f\") " pod="openstack/nova-scheduler-0" Feb 16 21:15:40 crc kubenswrapper[4811]: I0216 21:15:40.800725 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6lx8\" (UniqueName: \"kubernetes.io/projected/9fa14015-5aeb-49dd-85d6-772ab019e88f-kube-api-access-r6lx8\") pod \"nova-scheduler-0\" (UID: \"9fa14015-5aeb-49dd-85d6-772ab019e88f\") " pod="openstack/nova-scheduler-0" Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.061699 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.253522 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e259daff-e55a-47d6-b55d-0c63fa1fe468","Type":"ContainerDied","Data":"854ac225a5109a27bf159a056897eb17ae684addd44a430b8c7ad4a4ff9555a8"} Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.253785 4811 scope.go:117] "RemoveContainer" containerID="53b5d3f8756c1c220a34c53d89a9cd4b0f114769e6edaeb7cc6afdc07c6e63f7" Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.253950 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.285222 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.285792 4811 scope.go:117] "RemoveContainer" containerID="241ace9234c895fbde5de6da0dd2ee18f6b33a503ebb72262ea0bc0d1f76a9c8" Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.295109 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.320800 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.322529 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.328481 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.350260 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.392023 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2be791e-5e97-4def-86cb-06759aac69b1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e2be791e-5e97-4def-86cb-06759aac69b1\") " pod="openstack/nova-api-0" Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.392124 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2be791e-5e97-4def-86cb-06759aac69b1-logs\") pod \"nova-api-0\" (UID: \"e2be791e-5e97-4def-86cb-06759aac69b1\") " pod="openstack/nova-api-0" Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.392163 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2be791e-5e97-4def-86cb-06759aac69b1-config-data\") pod \"nova-api-0\" (UID: \"e2be791e-5e97-4def-86cb-06759aac69b1\") " pod="openstack/nova-api-0" Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.392230 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z2xs\" (UniqueName: \"kubernetes.io/projected/e2be791e-5e97-4def-86cb-06759aac69b1-kube-api-access-6z2xs\") pod \"nova-api-0\" (UID: \"e2be791e-5e97-4def-86cb-06759aac69b1\") " pod="openstack/nova-api-0" Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.494550 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/e2be791e-5e97-4def-86cb-06759aac69b1-logs\") pod \"nova-api-0\" (UID: \"e2be791e-5e97-4def-86cb-06759aac69b1\") " pod="openstack/nova-api-0" Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.494629 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2be791e-5e97-4def-86cb-06759aac69b1-config-data\") pod \"nova-api-0\" (UID: \"e2be791e-5e97-4def-86cb-06759aac69b1\") " pod="openstack/nova-api-0" Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.494742 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z2xs\" (UniqueName: \"kubernetes.io/projected/e2be791e-5e97-4def-86cb-06759aac69b1-kube-api-access-6z2xs\") pod \"nova-api-0\" (UID: \"e2be791e-5e97-4def-86cb-06759aac69b1\") " pod="openstack/nova-api-0" Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.494805 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2be791e-5e97-4def-86cb-06759aac69b1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e2be791e-5e97-4def-86cb-06759aac69b1\") " pod="openstack/nova-api-0" Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.496161 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2be791e-5e97-4def-86cb-06759aac69b1-logs\") pod \"nova-api-0\" (UID: \"e2be791e-5e97-4def-86cb-06759aac69b1\") " pod="openstack/nova-api-0" Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.503232 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2be791e-5e97-4def-86cb-06759aac69b1-config-data\") pod \"nova-api-0\" (UID: \"e2be791e-5e97-4def-86cb-06759aac69b1\") " pod="openstack/nova-api-0" Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.504501 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2be791e-5e97-4def-86cb-06759aac69b1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e2be791e-5e97-4def-86cb-06759aac69b1\") " pod="openstack/nova-api-0" Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.518816 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z2xs\" (UniqueName: \"kubernetes.io/projected/e2be791e-5e97-4def-86cb-06759aac69b1-kube-api-access-6z2xs\") pod \"nova-api-0\" (UID: \"e2be791e-5e97-4def-86cb-06759aac69b1\") " pod="openstack/nova-api-0" Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.570303 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:15:41 crc kubenswrapper[4811]: W0216 21:15:41.571174 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fa14015_5aeb_49dd_85d6_772ab019e88f.slice/crio-7a68bce65c339ae676c99ed415a6f3d60b4dc3066cf8ba36566fd47f37e459eb WatchSource:0}: Error finding container 7a68bce65c339ae676c99ed415a6f3d60b4dc3066cf8ba36566fd47f37e459eb: Status 404 returned error can't find the container with id 7a68bce65c339ae676c99ed415a6f3d60b4dc3066cf8ba36566fd47f37e459eb Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.691685 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.863365 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 21:15:41 crc kubenswrapper[4811]: I0216 21:15:41.864609 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 21:15:42 crc kubenswrapper[4811]: W0216 21:15:42.164629 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2be791e_5e97_4def_86cb_06759aac69b1.slice/crio-2b2175e7a07e873674985dede4960dd6dce3915c59ea0779ab6bc9523a9ae3d6 WatchSource:0}: Error finding container 2b2175e7a07e873674985dede4960dd6dce3915c59ea0779ab6bc9523a9ae3d6: Status 404 returned error can't find the container with id 2b2175e7a07e873674985dede4960dd6dce3915c59ea0779ab6bc9523a9ae3d6 Feb 16 21:15:42 crc kubenswrapper[4811]: I0216 21:15:42.167277 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:15:42 crc kubenswrapper[4811]: I0216 21:15:42.270926 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e2be791e-5e97-4def-86cb-06759aac69b1","Type":"ContainerStarted","Data":"2b2175e7a07e873674985dede4960dd6dce3915c59ea0779ab6bc9523a9ae3d6"} Feb 16 21:15:42 crc kubenswrapper[4811]: I0216 21:15:42.272603 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9fa14015-5aeb-49dd-85d6-772ab019e88f","Type":"ContainerStarted","Data":"1e21e7ce3d1b5e71eec007892ec95aaf4f16328755f04fca19777300fafa0293"} Feb 16 21:15:42 crc kubenswrapper[4811]: I0216 21:15:42.272670 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9fa14015-5aeb-49dd-85d6-772ab019e88f","Type":"ContainerStarted","Data":"7a68bce65c339ae676c99ed415a6f3d60b4dc3066cf8ba36566fd47f37e459eb"} Feb 16 21:15:42 crc 
kubenswrapper[4811]: I0216 21:15:42.291037 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.291017913 podStartE2EDuration="2.291017913s" podCreationTimestamp="2026-02-16 21:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:15:42.286447567 +0000 UTC m=+1160.215743515" watchObservedRunningTime="2026-02-16 21:15:42.291017913 +0000 UTC m=+1160.220313851" Feb 16 21:15:42 crc kubenswrapper[4811]: I0216 21:15:42.714823 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e259daff-e55a-47d6-b55d-0c63fa1fe468" path="/var/lib/kubelet/pods/e259daff-e55a-47d6-b55d-0c63fa1fe468/volumes" Feb 16 21:15:43 crc kubenswrapper[4811]: I0216 21:15:43.288814 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e2be791e-5e97-4def-86cb-06759aac69b1","Type":"ContainerStarted","Data":"0cab32ec699e67b1cd6a06ca9668131dceb5c45eff89e8c9a41aab5f22a69e53"} Feb 16 21:15:43 crc kubenswrapper[4811]: I0216 21:15:43.288907 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e2be791e-5e97-4def-86cb-06759aac69b1","Type":"ContainerStarted","Data":"77ed8986fde87b22cc06a46c91824c751b27220bc8d7192116fa832c937747c7"} Feb 16 21:15:43 crc kubenswrapper[4811]: I0216 21:15:43.327405 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.32738608 podStartE2EDuration="2.32738608s" podCreationTimestamp="2026-02-16 21:15:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:15:43.326463756 +0000 UTC m=+1161.255759704" watchObservedRunningTime="2026-02-16 21:15:43.32738608 +0000 UTC m=+1161.256682028" Feb 16 21:15:44 crc kubenswrapper[4811]: I0216 21:15:44.611262 4811 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 16 21:15:45 crc kubenswrapper[4811]: I0216 21:15:45.470136 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 21:15:46 crc kubenswrapper[4811]: I0216 21:15:46.061983 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 21:15:46 crc kubenswrapper[4811]: I0216 21:15:46.864035 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 21:15:46 crc kubenswrapper[4811]: I0216 21:15:46.864415 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 21:15:47 crc kubenswrapper[4811]: I0216 21:15:47.869383 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d299bddd-235d-4382-8590-1103bf10fbd7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:15:47 crc kubenswrapper[4811]: I0216 21:15:47.875410 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d299bddd-235d-4382-8590-1103bf10fbd7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:15:48 crc kubenswrapper[4811]: I0216 21:15:48.363686 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:15:48 crc kubenswrapper[4811]: I0216 21:15:48.364021 4811 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:15:48 crc kubenswrapper[4811]: I0216 21:15:48.364071 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" Feb 16 21:15:48 crc kubenswrapper[4811]: I0216 21:15:48.364975 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c5a0cef66cb330788b58ea1a5723377ba1dc93aa2016d4d0b1ec1df645e788ff"} pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:15:48 crc kubenswrapper[4811]: I0216 21:15:48.365082 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" containerID="cri-o://c5a0cef66cb330788b58ea1a5723377ba1dc93aa2016d4d0b1ec1df645e788ff" gracePeriod=600 Feb 16 21:15:48 crc kubenswrapper[4811]: E0216 21:15:48.703757 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:15:49 crc kubenswrapper[4811]: I0216 21:15:49.346802 4811 generic.go:334] "Generic (PLEG): container finished" podID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerID="c5a0cef66cb330788b58ea1a5723377ba1dc93aa2016d4d0b1ec1df645e788ff" exitCode=0 Feb 16 
21:15:49 crc kubenswrapper[4811]: I0216 21:15:49.347003 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerDied","Data":"c5a0cef66cb330788b58ea1a5723377ba1dc93aa2016d4d0b1ec1df645e788ff"} Feb 16 21:15:49 crc kubenswrapper[4811]: I0216 21:15:49.347065 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerStarted","Data":"38ec19e15b9324f2ccde21c32410034a04474118800f86b56f7b258842a5727e"} Feb 16 21:15:49 crc kubenswrapper[4811]: I0216 21:15:49.347089 4811 scope.go:117] "RemoveContainer" containerID="aec5c764f743f1a4d04f239fd31aa099d13a84893ba733482b70a62ad8b5e0d2" Feb 16 21:15:49 crc kubenswrapper[4811]: I0216 21:15:49.697297 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 21:15:49 crc kubenswrapper[4811]: I0216 21:15:49.697834 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="ffc95bb9-a405-4472-9879-f2dc826ffdb9" containerName="kube-state-metrics" containerID="cri-o://8a54c9ea6dd3e17f85ee90f94dc31333687c286de589cb7c7f4ba20b9e6c92c8" gracePeriod=30 Feb 16 21:15:50 crc kubenswrapper[4811]: I0216 21:15:50.294626 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 21:15:50 crc kubenswrapper[4811]: I0216 21:15:50.360386 4811 generic.go:334] "Generic (PLEG): container finished" podID="ffc95bb9-a405-4472-9879-f2dc826ffdb9" containerID="8a54c9ea6dd3e17f85ee90f94dc31333687c286de589cb7c7f4ba20b9e6c92c8" exitCode=2 Feb 16 21:15:50 crc kubenswrapper[4811]: I0216 21:15:50.360425 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ffc95bb9-a405-4472-9879-f2dc826ffdb9","Type":"ContainerDied","Data":"8a54c9ea6dd3e17f85ee90f94dc31333687c286de589cb7c7f4ba20b9e6c92c8"} Feb 16 21:15:50 crc kubenswrapper[4811]: I0216 21:15:50.360446 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ffc95bb9-a405-4472-9879-f2dc826ffdb9","Type":"ContainerDied","Data":"a0541629acfdc2b5f0bc379bc515010f8bf31491b0b50b24864cfc0f21dc5705"} Feb 16 21:15:50 crc kubenswrapper[4811]: I0216 21:15:50.360462 4811 scope.go:117] "RemoveContainer" containerID="8a54c9ea6dd3e17f85ee90f94dc31333687c286de589cb7c7f4ba20b9e6c92c8" Feb 16 21:15:50 crc kubenswrapper[4811]: I0216 21:15:50.360563 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 21:15:50 crc kubenswrapper[4811]: I0216 21:15:50.399090 4811 scope.go:117] "RemoveContainer" containerID="8a54c9ea6dd3e17f85ee90f94dc31333687c286de589cb7c7f4ba20b9e6c92c8" Feb 16 21:15:50 crc kubenswrapper[4811]: E0216 21:15:50.399835 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a54c9ea6dd3e17f85ee90f94dc31333687c286de589cb7c7f4ba20b9e6c92c8\": container with ID starting with 8a54c9ea6dd3e17f85ee90f94dc31333687c286de589cb7c7f4ba20b9e6c92c8 not found: ID does not exist" containerID="8a54c9ea6dd3e17f85ee90f94dc31333687c286de589cb7c7f4ba20b9e6c92c8" Feb 16 21:15:50 crc kubenswrapper[4811]: I0216 21:15:50.399869 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a54c9ea6dd3e17f85ee90f94dc31333687c286de589cb7c7f4ba20b9e6c92c8"} err="failed to get container status \"8a54c9ea6dd3e17f85ee90f94dc31333687c286de589cb7c7f4ba20b9e6c92c8\": rpc error: code = NotFound desc = could not find container \"8a54c9ea6dd3e17f85ee90f94dc31333687c286de589cb7c7f4ba20b9e6c92c8\": container with ID starting with 8a54c9ea6dd3e17f85ee90f94dc31333687c286de589cb7c7f4ba20b9e6c92c8 not found: ID does not exist" Feb 16 21:15:50 crc kubenswrapper[4811]: I0216 21:15:50.493894 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmxrj\" (UniqueName: \"kubernetes.io/projected/ffc95bb9-a405-4472-9879-f2dc826ffdb9-kube-api-access-tmxrj\") pod \"ffc95bb9-a405-4472-9879-f2dc826ffdb9\" (UID: \"ffc95bb9-a405-4472-9879-f2dc826ffdb9\") " Feb 16 21:15:50 crc kubenswrapper[4811]: I0216 21:15:50.526454 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffc95bb9-a405-4472-9879-f2dc826ffdb9-kube-api-access-tmxrj" (OuterVolumeSpecName: "kube-api-access-tmxrj") pod "ffc95bb9-a405-4472-9879-f2dc826ffdb9" (UID: 
"ffc95bb9-a405-4472-9879-f2dc826ffdb9"). InnerVolumeSpecName "kube-api-access-tmxrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:50 crc kubenswrapper[4811]: I0216 21:15:50.597935 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmxrj\" (UniqueName: \"kubernetes.io/projected/ffc95bb9-a405-4472-9879-f2dc826ffdb9-kube-api-access-tmxrj\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:50 crc kubenswrapper[4811]: I0216 21:15:50.731851 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 21:15:50 crc kubenswrapper[4811]: I0216 21:15:50.736724 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 21:15:50 crc kubenswrapper[4811]: I0216 21:15:50.766670 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 21:15:50 crc kubenswrapper[4811]: E0216 21:15:50.767491 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc95bb9-a405-4472-9879-f2dc826ffdb9" containerName="kube-state-metrics" Feb 16 21:15:50 crc kubenswrapper[4811]: I0216 21:15:50.767518 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffc95bb9-a405-4472-9879-f2dc826ffdb9" containerName="kube-state-metrics" Feb 16 21:15:50 crc kubenswrapper[4811]: I0216 21:15:50.767753 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffc95bb9-a405-4472-9879-f2dc826ffdb9" containerName="kube-state-metrics" Feb 16 21:15:50 crc kubenswrapper[4811]: I0216 21:15:50.768865 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 21:15:50 crc kubenswrapper[4811]: I0216 21:15:50.770214 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 16 21:15:50 crc kubenswrapper[4811]: I0216 21:15:50.771793 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 16 21:15:50 crc kubenswrapper[4811]: I0216 21:15:50.776189 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 21:15:50 crc kubenswrapper[4811]: I0216 21:15:50.903335 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/435ef7b8-9bee-4232-89cf-f8fd9ad487a7-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"435ef7b8-9bee-4232-89cf-f8fd9ad487a7\") " pod="openstack/kube-state-metrics-0" Feb 16 21:15:50 crc kubenswrapper[4811]: I0216 21:15:50.903404 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435ef7b8-9bee-4232-89cf-f8fd9ad487a7-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"435ef7b8-9bee-4232-89cf-f8fd9ad487a7\") " pod="openstack/kube-state-metrics-0" Feb 16 21:15:50 crc kubenswrapper[4811]: I0216 21:15:50.903790 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmkpw\" (UniqueName: \"kubernetes.io/projected/435ef7b8-9bee-4232-89cf-f8fd9ad487a7-kube-api-access-dmkpw\") pod \"kube-state-metrics-0\" (UID: \"435ef7b8-9bee-4232-89cf-f8fd9ad487a7\") " pod="openstack/kube-state-metrics-0" Feb 16 21:15:50 crc kubenswrapper[4811]: I0216 21:15:50.904018 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/435ef7b8-9bee-4232-89cf-f8fd9ad487a7-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"435ef7b8-9bee-4232-89cf-f8fd9ad487a7\") " pod="openstack/kube-state-metrics-0" Feb 16 21:15:51 crc kubenswrapper[4811]: I0216 21:15:51.006244 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmkpw\" (UniqueName: \"kubernetes.io/projected/435ef7b8-9bee-4232-89cf-f8fd9ad487a7-kube-api-access-dmkpw\") pod \"kube-state-metrics-0\" (UID: \"435ef7b8-9bee-4232-89cf-f8fd9ad487a7\") " pod="openstack/kube-state-metrics-0" Feb 16 21:15:51 crc kubenswrapper[4811]: I0216 21:15:51.006639 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/435ef7b8-9bee-4232-89cf-f8fd9ad487a7-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"435ef7b8-9bee-4232-89cf-f8fd9ad487a7\") " pod="openstack/kube-state-metrics-0" Feb 16 21:15:51 crc kubenswrapper[4811]: I0216 21:15:51.006729 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/435ef7b8-9bee-4232-89cf-f8fd9ad487a7-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"435ef7b8-9bee-4232-89cf-f8fd9ad487a7\") " pod="openstack/kube-state-metrics-0" Feb 16 21:15:51 crc kubenswrapper[4811]: I0216 21:15:51.007547 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435ef7b8-9bee-4232-89cf-f8fd9ad487a7-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"435ef7b8-9bee-4232-89cf-f8fd9ad487a7\") " pod="openstack/kube-state-metrics-0" Feb 16 21:15:51 crc kubenswrapper[4811]: I0216 21:15:51.011016 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/435ef7b8-9bee-4232-89cf-f8fd9ad487a7-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"435ef7b8-9bee-4232-89cf-f8fd9ad487a7\") " pod="openstack/kube-state-metrics-0" Feb 16 21:15:51 crc kubenswrapper[4811]: I0216 21:15:51.018943 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/435ef7b8-9bee-4232-89cf-f8fd9ad487a7-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"435ef7b8-9bee-4232-89cf-f8fd9ad487a7\") " pod="openstack/kube-state-metrics-0" Feb 16 21:15:51 crc kubenswrapper[4811]: I0216 21:15:51.023135 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/435ef7b8-9bee-4232-89cf-f8fd9ad487a7-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"435ef7b8-9bee-4232-89cf-f8fd9ad487a7\") " pod="openstack/kube-state-metrics-0" Feb 16 21:15:51 crc kubenswrapper[4811]: I0216 21:15:51.024509 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmkpw\" (UniqueName: \"kubernetes.io/projected/435ef7b8-9bee-4232-89cf-f8fd9ad487a7-kube-api-access-dmkpw\") pod \"kube-state-metrics-0\" (UID: \"435ef7b8-9bee-4232-89cf-f8fd9ad487a7\") " pod="openstack/kube-state-metrics-0" Feb 16 21:15:51 crc kubenswrapper[4811]: I0216 21:15:51.062919 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 21:15:51 crc kubenswrapper[4811]: I0216 21:15:51.092115 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 21:15:51 crc kubenswrapper[4811]: I0216 21:15:51.106428 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 21:15:51 crc kubenswrapper[4811]: I0216 21:15:51.499935 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 21:15:51 crc kubenswrapper[4811]: I0216 21:15:51.670252 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 21:15:51 crc kubenswrapper[4811]: W0216 21:15:51.672029 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod435ef7b8_9bee_4232_89cf_f8fd9ad487a7.slice/crio-34fa6060e6756bba51b0e9704f755b7752383696223f7410e63d70b94b97d6c7 WatchSource:0}: Error finding container 34fa6060e6756bba51b0e9704f755b7752383696223f7410e63d70b94b97d6c7: Status 404 returned error can't find the container with id 34fa6060e6756bba51b0e9704f755b7752383696223f7410e63d70b94b97d6c7 Feb 16 21:15:51 crc kubenswrapper[4811]: I0216 21:15:51.693143 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 21:15:51 crc kubenswrapper[4811]: I0216 21:15:51.693512 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 21:15:51 crc kubenswrapper[4811]: I0216 21:15:51.960886 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:15:51 crc kubenswrapper[4811]: I0216 21:15:51.961123 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7e772b95-2fcf-4480-a3a9-926834bd068f" containerName="ceilometer-central-agent" containerID="cri-o://9b34c12ca8bdeb37f523ad8b60a74c33fbdd78d587fa290294a69079f9e9dd98" gracePeriod=30 Feb 16 21:15:51 crc kubenswrapper[4811]: I0216 21:15:51.961513 
4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7e772b95-2fcf-4480-a3a9-926834bd068f" containerName="proxy-httpd" containerID="cri-o://95c6dd6c62591f720d02ecbdc815e4e595c5c163ad4943220ce0476cabc1fc74" gracePeriod=30 Feb 16 21:15:51 crc kubenswrapper[4811]: I0216 21:15:51.961563 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7e772b95-2fcf-4480-a3a9-926834bd068f" containerName="sg-core" containerID="cri-o://5f7552948810e33b47a000a4e760bdfecb32c8fa7147e94dd879a087123cc73f" gracePeriod=30 Feb 16 21:15:51 crc kubenswrapper[4811]: I0216 21:15:51.961597 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7e772b95-2fcf-4480-a3a9-926834bd068f" containerName="ceilometer-notification-agent" containerID="cri-o://04bd0528099bf3be8bb22c23be9806b58d82414134de6f16a51c6607981eae81" gracePeriod=30 Feb 16 21:15:52 crc kubenswrapper[4811]: I0216 21:15:52.399737 4811 generic.go:334] "Generic (PLEG): container finished" podID="7e772b95-2fcf-4480-a3a9-926834bd068f" containerID="95c6dd6c62591f720d02ecbdc815e4e595c5c163ad4943220ce0476cabc1fc74" exitCode=0 Feb 16 21:15:52 crc kubenswrapper[4811]: I0216 21:15:52.400147 4811 generic.go:334] "Generic (PLEG): container finished" podID="7e772b95-2fcf-4480-a3a9-926834bd068f" containerID="5f7552948810e33b47a000a4e760bdfecb32c8fa7147e94dd879a087123cc73f" exitCode=2 Feb 16 21:15:52 crc kubenswrapper[4811]: I0216 21:15:52.400264 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e772b95-2fcf-4480-a3a9-926834bd068f","Type":"ContainerDied","Data":"95c6dd6c62591f720d02ecbdc815e4e595c5c163ad4943220ce0476cabc1fc74"} Feb 16 21:15:52 crc kubenswrapper[4811]: I0216 21:15:52.400368 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7e772b95-2fcf-4480-a3a9-926834bd068f","Type":"ContainerDied","Data":"5f7552948810e33b47a000a4e760bdfecb32c8fa7147e94dd879a087123cc73f"} Feb 16 21:15:52 crc kubenswrapper[4811]: I0216 21:15:52.409269 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"435ef7b8-9bee-4232-89cf-f8fd9ad487a7","Type":"ContainerStarted","Data":"34fa6060e6756bba51b0e9704f755b7752383696223f7410e63d70b94b97d6c7"} Feb 16 21:15:52 crc kubenswrapper[4811]: I0216 21:15:52.409341 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 16 21:15:52 crc kubenswrapper[4811]: I0216 21:15:52.428613 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.975772364 podStartE2EDuration="2.428597196s" podCreationTimestamp="2026-02-16 21:15:50 +0000 UTC" firstStartedPulling="2026-02-16 21:15:51.678456479 +0000 UTC m=+1169.607752407" lastFinishedPulling="2026-02-16 21:15:52.131281301 +0000 UTC m=+1170.060577239" observedRunningTime="2026-02-16 21:15:52.425032518 +0000 UTC m=+1170.354328456" watchObservedRunningTime="2026-02-16 21:15:52.428597196 +0000 UTC m=+1170.357893134" Feb 16 21:15:52 crc kubenswrapper[4811]: E0216 21:15:52.535257 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e772b95_2fcf_4480_a3a9_926834bd068f.slice/crio-9b34c12ca8bdeb37f523ad8b60a74c33fbdd78d587fa290294a69079f9e9dd98.scope\": RecentStats: unable to find data in memory cache]" Feb 16 21:15:52 crc kubenswrapper[4811]: I0216 21:15:52.720686 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffc95bb9-a405-4472-9879-f2dc826ffdb9" path="/var/lib/kubelet/pods/ffc95bb9-a405-4472-9879-f2dc826ffdb9/volumes" Feb 16 21:15:52 crc kubenswrapper[4811]: I0216 21:15:52.776323 4811 prober.go:107] "Probe 
failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e2be791e-5e97-4def-86cb-06759aac69b1" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.214:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 21:15:52 crc kubenswrapper[4811]: I0216 21:15:52.777185 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e2be791e-5e97-4def-86cb-06759aac69b1" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.214:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 21:15:53 crc kubenswrapper[4811]: I0216 21:15:53.419512 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"435ef7b8-9bee-4232-89cf-f8fd9ad487a7","Type":"ContainerStarted","Data":"893a45e0a765be20c4bb0411e3df01683dd181773f31c63243f30ce2bebdbb19"} Feb 16 21:15:53 crc kubenswrapper[4811]: I0216 21:15:53.425265 4811 generic.go:334] "Generic (PLEG): container finished" podID="7e772b95-2fcf-4480-a3a9-926834bd068f" containerID="9b34c12ca8bdeb37f523ad8b60a74c33fbdd78d587fa290294a69079f9e9dd98" exitCode=0 Feb 16 21:15:53 crc kubenswrapper[4811]: I0216 21:15:53.425326 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e772b95-2fcf-4480-a3a9-926834bd068f","Type":"ContainerDied","Data":"9b34c12ca8bdeb37f523ad8b60a74c33fbdd78d587fa290294a69079f9e9dd98"} Feb 16 21:15:54 crc kubenswrapper[4811]: I0216 21:15:54.490580 4811 generic.go:334] "Generic (PLEG): container finished" podID="7e772b95-2fcf-4480-a3a9-926834bd068f" containerID="04bd0528099bf3be8bb22c23be9806b58d82414134de6f16a51c6607981eae81" exitCode=0 Feb 16 21:15:54 crc kubenswrapper[4811]: I0216 21:15:54.491427 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7e772b95-2fcf-4480-a3a9-926834bd068f","Type":"ContainerDied","Data":"04bd0528099bf3be8bb22c23be9806b58d82414134de6f16a51c6607981eae81"} Feb 16 21:15:54 crc kubenswrapper[4811]: I0216 21:15:54.711927 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:15:54 crc kubenswrapper[4811]: I0216 21:15:54.859607 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e772b95-2fcf-4480-a3a9-926834bd068f-sg-core-conf-yaml\") pod \"7e772b95-2fcf-4480-a3a9-926834bd068f\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " Feb 16 21:15:54 crc kubenswrapper[4811]: I0216 21:15:54.859670 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e772b95-2fcf-4480-a3a9-926834bd068f-combined-ca-bundle\") pod \"7e772b95-2fcf-4480-a3a9-926834bd068f\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " Feb 16 21:15:54 crc kubenswrapper[4811]: I0216 21:15:54.859725 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pvdf\" (UniqueName: \"kubernetes.io/projected/7e772b95-2fcf-4480-a3a9-926834bd068f-kube-api-access-4pvdf\") pod \"7e772b95-2fcf-4480-a3a9-926834bd068f\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " Feb 16 21:15:54 crc kubenswrapper[4811]: I0216 21:15:54.859754 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e772b95-2fcf-4480-a3a9-926834bd068f-log-httpd\") pod \"7e772b95-2fcf-4480-a3a9-926834bd068f\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " Feb 16 21:15:54 crc kubenswrapper[4811]: I0216 21:15:54.859777 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e772b95-2fcf-4480-a3a9-926834bd068f-scripts\") pod 
\"7e772b95-2fcf-4480-a3a9-926834bd068f\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " Feb 16 21:15:54 crc kubenswrapper[4811]: I0216 21:15:54.859796 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e772b95-2fcf-4480-a3a9-926834bd068f-config-data\") pod \"7e772b95-2fcf-4480-a3a9-926834bd068f\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " Feb 16 21:15:54 crc kubenswrapper[4811]: I0216 21:15:54.859948 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e772b95-2fcf-4480-a3a9-926834bd068f-run-httpd\") pod \"7e772b95-2fcf-4480-a3a9-926834bd068f\" (UID: \"7e772b95-2fcf-4480-a3a9-926834bd068f\") " Feb 16 21:15:54 crc kubenswrapper[4811]: I0216 21:15:54.861863 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e772b95-2fcf-4480-a3a9-926834bd068f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7e772b95-2fcf-4480-a3a9-926834bd068f" (UID: "7e772b95-2fcf-4480-a3a9-926834bd068f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:15:54 crc kubenswrapper[4811]: I0216 21:15:54.862843 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e772b95-2fcf-4480-a3a9-926834bd068f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7e772b95-2fcf-4480-a3a9-926834bd068f" (UID: "7e772b95-2fcf-4480-a3a9-926834bd068f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:15:54 crc kubenswrapper[4811]: I0216 21:15:54.873498 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e772b95-2fcf-4480-a3a9-926834bd068f-kube-api-access-4pvdf" (OuterVolumeSpecName: "kube-api-access-4pvdf") pod "7e772b95-2fcf-4480-a3a9-926834bd068f" (UID: "7e772b95-2fcf-4480-a3a9-926834bd068f"). 
InnerVolumeSpecName "kube-api-access-4pvdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:54 crc kubenswrapper[4811]: I0216 21:15:54.878432 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e772b95-2fcf-4480-a3a9-926834bd068f-scripts" (OuterVolumeSpecName: "scripts") pod "7e772b95-2fcf-4480-a3a9-926834bd068f" (UID: "7e772b95-2fcf-4480-a3a9-926834bd068f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:54 crc kubenswrapper[4811]: I0216 21:15:54.895257 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e772b95-2fcf-4480-a3a9-926834bd068f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7e772b95-2fcf-4480-a3a9-926834bd068f" (UID: "7e772b95-2fcf-4480-a3a9-926834bd068f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:54 crc kubenswrapper[4811]: I0216 21:15:54.958923 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e772b95-2fcf-4480-a3a9-926834bd068f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e772b95-2fcf-4480-a3a9-926834bd068f" (UID: "7e772b95-2fcf-4480-a3a9-926834bd068f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:54 crc kubenswrapper[4811]: I0216 21:15:54.962266 4811 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e772b95-2fcf-4480-a3a9-926834bd068f-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:54 crc kubenswrapper[4811]: I0216 21:15:54.962294 4811 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e772b95-2fcf-4480-a3a9-926834bd068f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:54 crc kubenswrapper[4811]: I0216 21:15:54.962305 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e772b95-2fcf-4480-a3a9-926834bd068f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:54 crc kubenswrapper[4811]: I0216 21:15:54.962317 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pvdf\" (UniqueName: \"kubernetes.io/projected/7e772b95-2fcf-4480-a3a9-926834bd068f-kube-api-access-4pvdf\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:54 crc kubenswrapper[4811]: I0216 21:15:54.962325 4811 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e772b95-2fcf-4480-a3a9-926834bd068f-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:54 crc kubenswrapper[4811]: I0216 21:15:54.962334 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e772b95-2fcf-4480-a3a9-926834bd068f-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.022961 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e772b95-2fcf-4480-a3a9-926834bd068f-config-data" (OuterVolumeSpecName: "config-data") pod "7e772b95-2fcf-4480-a3a9-926834bd068f" (UID: "7e772b95-2fcf-4480-a3a9-926834bd068f"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.063880 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e772b95-2fcf-4480-a3a9-926834bd068f-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.505483 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e772b95-2fcf-4480-a3a9-926834bd068f","Type":"ContainerDied","Data":"2bfbfa0291c97ec2644da6143067a7a88be9f444c629137d7872989959f7c351"} Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.505536 4811 scope.go:117] "RemoveContainer" containerID="95c6dd6c62591f720d02ecbdc815e4e595c5c163ad4943220ce0476cabc1fc74" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.505550 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.537927 4811 scope.go:117] "RemoveContainer" containerID="5f7552948810e33b47a000a4e760bdfecb32c8fa7147e94dd879a087123cc73f" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.562670 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.576289 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.583531 4811 scope.go:117] "RemoveContainer" containerID="04bd0528099bf3be8bb22c23be9806b58d82414134de6f16a51c6607981eae81" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.587427 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:15:55 crc kubenswrapper[4811]: E0216 21:15:55.587893 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e772b95-2fcf-4480-a3a9-926834bd068f" containerName="proxy-httpd" Feb 
16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.587915 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e772b95-2fcf-4480-a3a9-926834bd068f" containerName="proxy-httpd" Feb 16 21:15:55 crc kubenswrapper[4811]: E0216 21:15:55.587942 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e772b95-2fcf-4480-a3a9-926834bd068f" containerName="sg-core" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.587952 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e772b95-2fcf-4480-a3a9-926834bd068f" containerName="sg-core" Feb 16 21:15:55 crc kubenswrapper[4811]: E0216 21:15:55.587971 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e772b95-2fcf-4480-a3a9-926834bd068f" containerName="ceilometer-central-agent" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.587980 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e772b95-2fcf-4480-a3a9-926834bd068f" containerName="ceilometer-central-agent" Feb 16 21:15:55 crc kubenswrapper[4811]: E0216 21:15:55.588020 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e772b95-2fcf-4480-a3a9-926834bd068f" containerName="ceilometer-notification-agent" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.588029 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e772b95-2fcf-4480-a3a9-926834bd068f" containerName="ceilometer-notification-agent" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.588267 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e772b95-2fcf-4480-a3a9-926834bd068f" containerName="sg-core" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.588284 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e772b95-2fcf-4480-a3a9-926834bd068f" containerName="proxy-httpd" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.588305 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e772b95-2fcf-4480-a3a9-926834bd068f" 
containerName="ceilometer-notification-agent" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.588320 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e772b95-2fcf-4480-a3a9-926834bd068f" containerName="ceilometer-central-agent" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.591086 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.593794 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.594012 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.598879 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.604637 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.621032 4811 scope.go:117] "RemoveContainer" containerID="9b34c12ca8bdeb37f523ad8b60a74c33fbdd78d587fa290294a69079f9e9dd98" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.674696 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-scripts\") pod \"ceilometer-0\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " pod="openstack/ceilometer-0" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.674760 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " pod="openstack/ceilometer-0" 
Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.674842 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-config-data\") pod \"ceilometer-0\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " pod="openstack/ceilometer-0" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.674921 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/302bf249-51e9-4271-915a-71bbc20c6d4e-run-httpd\") pod \"ceilometer-0\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " pod="openstack/ceilometer-0" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.674960 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5scxc\" (UniqueName: \"kubernetes.io/projected/302bf249-51e9-4271-915a-71bbc20c6d4e-kube-api-access-5scxc\") pod \"ceilometer-0\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " pod="openstack/ceilometer-0" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.675013 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/302bf249-51e9-4271-915a-71bbc20c6d4e-log-httpd\") pod \"ceilometer-0\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " pod="openstack/ceilometer-0" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.675063 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " pod="openstack/ceilometer-0" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.675082 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " pod="openstack/ceilometer-0" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.776860 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-scripts\") pod \"ceilometer-0\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " pod="openstack/ceilometer-0" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.776965 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " pod="openstack/ceilometer-0" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.777041 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-config-data\") pod \"ceilometer-0\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " pod="openstack/ceilometer-0" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.777126 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/302bf249-51e9-4271-915a-71bbc20c6d4e-run-httpd\") pod \"ceilometer-0\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " pod="openstack/ceilometer-0" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.777208 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5scxc\" (UniqueName: \"kubernetes.io/projected/302bf249-51e9-4271-915a-71bbc20c6d4e-kube-api-access-5scxc\") pod \"ceilometer-0\" (UID: 
\"302bf249-51e9-4271-915a-71bbc20c6d4e\") " pod="openstack/ceilometer-0" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.777321 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/302bf249-51e9-4271-915a-71bbc20c6d4e-log-httpd\") pod \"ceilometer-0\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " pod="openstack/ceilometer-0" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.777395 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " pod="openstack/ceilometer-0" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.777449 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " pod="openstack/ceilometer-0" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.787665 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/302bf249-51e9-4271-915a-71bbc20c6d4e-log-httpd\") pod \"ceilometer-0\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " pod="openstack/ceilometer-0" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.787978 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/302bf249-51e9-4271-915a-71bbc20c6d4e-run-httpd\") pod \"ceilometer-0\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " pod="openstack/ceilometer-0" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.788481 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-scripts\") pod \"ceilometer-0\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " pod="openstack/ceilometer-0" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.789383 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " pod="openstack/ceilometer-0" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.790664 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " pod="openstack/ceilometer-0" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.792996 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " pod="openstack/ceilometer-0" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.793623 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-config-data\") pod \"ceilometer-0\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " pod="openstack/ceilometer-0" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 21:15:55.809390 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5scxc\" (UniqueName: \"kubernetes.io/projected/302bf249-51e9-4271-915a-71bbc20c6d4e-kube-api-access-5scxc\") pod \"ceilometer-0\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " pod="openstack/ceilometer-0" Feb 16 21:15:55 crc kubenswrapper[4811]: I0216 
21:15:55.925298 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:15:56 crc kubenswrapper[4811]: I0216 21:15:56.415905 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:15:56 crc kubenswrapper[4811]: I0216 21:15:56.524499 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"302bf249-51e9-4271-915a-71bbc20c6d4e","Type":"ContainerStarted","Data":"92deb0921ae0daf1af99f56184834fa9df9d50081d2554fbec6cbd9fedf8b505"} Feb 16 21:15:56 crc kubenswrapper[4811]: I0216 21:15:56.714512 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e772b95-2fcf-4480-a3a9-926834bd068f" path="/var/lib/kubelet/pods/7e772b95-2fcf-4480-a3a9-926834bd068f/volumes" Feb 16 21:15:56 crc kubenswrapper[4811]: I0216 21:15:56.875358 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 21:15:56 crc kubenswrapper[4811]: I0216 21:15:56.876899 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 21:15:56 crc kubenswrapper[4811]: I0216 21:15:56.891459 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 21:15:57 crc kubenswrapper[4811]: I0216 21:15:57.538977 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"302bf249-51e9-4271-915a-71bbc20c6d4e","Type":"ContainerStarted","Data":"c029800e08dc22044489336d3f621fb87384a0d1758d471f75018bf338d702d4"} Feb 16 21:15:57 crc kubenswrapper[4811]: I0216 21:15:57.553882 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 21:15:58 crc kubenswrapper[4811]: I0216 21:15:58.551490 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"302bf249-51e9-4271-915a-71bbc20c6d4e","Type":"ContainerStarted","Data":"514c8d940c0cbf782ab7d006ca294d2dae1e68dae5f769bba5f98710a5dbc4fc"} Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.428955 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.563260 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"302bf249-51e9-4271-915a-71bbc20c6d4e","Type":"ContainerStarted","Data":"c327c1490dc76f3d91e2ec2060e76a7d5b8855e740caaf838fc236cab44158e2"} Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.566564 4811 generic.go:334] "Generic (PLEG): container finished" podID="551b3f79-c4bc-46e0-8c10-ec86c30ec6d5" containerID="27647711d95f31f6de815a4e30e19d291a3950473b3c2fe14f3099ad08b620ce" exitCode=137 Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.567560 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.567699 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"551b3f79-c4bc-46e0-8c10-ec86c30ec6d5","Type":"ContainerDied","Data":"27647711d95f31f6de815a4e30e19d291a3950473b3c2fe14f3099ad08b620ce"} Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.567729 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"551b3f79-c4bc-46e0-8c10-ec86c30ec6d5","Type":"ContainerDied","Data":"ccd7b95775479533bcf5c82479c380dfd7cecd975b0afa82c64225889eea100f"} Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.567746 4811 scope.go:117] "RemoveContainer" containerID="27647711d95f31f6de815a4e30e19d291a3950473b3c2fe14f3099ad08b620ce" Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.569251 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgkr9\" (UniqueName: \"kubernetes.io/projected/551b3f79-c4bc-46e0-8c10-ec86c30ec6d5-kube-api-access-wgkr9\") pod \"551b3f79-c4bc-46e0-8c10-ec86c30ec6d5\" (UID: \"551b3f79-c4bc-46e0-8c10-ec86c30ec6d5\") " Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.569311 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551b3f79-c4bc-46e0-8c10-ec86c30ec6d5-config-data\") pod \"551b3f79-c4bc-46e0-8c10-ec86c30ec6d5\" (UID: \"551b3f79-c4bc-46e0-8c10-ec86c30ec6d5\") " Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.569406 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551b3f79-c4bc-46e0-8c10-ec86c30ec6d5-combined-ca-bundle\") pod \"551b3f79-c4bc-46e0-8c10-ec86c30ec6d5\" (UID: \"551b3f79-c4bc-46e0-8c10-ec86c30ec6d5\") " Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.589300 4811 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/551b3f79-c4bc-46e0-8c10-ec86c30ec6d5-kube-api-access-wgkr9" (OuterVolumeSpecName: "kube-api-access-wgkr9") pod "551b3f79-c4bc-46e0-8c10-ec86c30ec6d5" (UID: "551b3f79-c4bc-46e0-8c10-ec86c30ec6d5"). InnerVolumeSpecName "kube-api-access-wgkr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.602337 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/551b3f79-c4bc-46e0-8c10-ec86c30ec6d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "551b3f79-c4bc-46e0-8c10-ec86c30ec6d5" (UID: "551b3f79-c4bc-46e0-8c10-ec86c30ec6d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.617756 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/551b3f79-c4bc-46e0-8c10-ec86c30ec6d5-config-data" (OuterVolumeSpecName: "config-data") pod "551b3f79-c4bc-46e0-8c10-ec86c30ec6d5" (UID: "551b3f79-c4bc-46e0-8c10-ec86c30ec6d5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.672545 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551b3f79-c4bc-46e0-8c10-ec86c30ec6d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.672771 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgkr9\" (UniqueName: \"kubernetes.io/projected/551b3f79-c4bc-46e0-8c10-ec86c30ec6d5-kube-api-access-wgkr9\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.672783 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551b3f79-c4bc-46e0-8c10-ec86c30ec6d5-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.723375 4811 scope.go:117] "RemoveContainer" containerID="27647711d95f31f6de815a4e30e19d291a3950473b3c2fe14f3099ad08b620ce" Feb 16 21:15:59 crc kubenswrapper[4811]: E0216 21:15:59.723570 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:15:59 crc kubenswrapper[4811]: E0216 21:15:59.725104 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27647711d95f31f6de815a4e30e19d291a3950473b3c2fe14f3099ad08b620ce\": container with ID starting with 27647711d95f31f6de815a4e30e19d291a3950473b3c2fe14f3099ad08b620ce not found: ID does not exist" containerID="27647711d95f31f6de815a4e30e19d291a3950473b3c2fe14f3099ad08b620ce" Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.725138 4811 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27647711d95f31f6de815a4e30e19d291a3950473b3c2fe14f3099ad08b620ce"} err="failed to get container status \"27647711d95f31f6de815a4e30e19d291a3950473b3c2fe14f3099ad08b620ce\": rpc error: code = NotFound desc = could not find container \"27647711d95f31f6de815a4e30e19d291a3950473b3c2fe14f3099ad08b620ce\": container with ID starting with 27647711d95f31f6de815a4e30e19d291a3950473b3c2fe14f3099ad08b620ce not found: ID does not exist" Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.916433 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.946775 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.958999 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 21:15:59 crc kubenswrapper[4811]: E0216 21:15:59.959795 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551b3f79-c4bc-46e0-8c10-ec86c30ec6d5" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.959825 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="551b3f79-c4bc-46e0-8c10-ec86c30ec6d5" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.960230 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="551b3f79-c4bc-46e0-8c10-ec86c30ec6d5" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.981034 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.983972 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.986861 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.987385 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 21:15:59 crc kubenswrapper[4811]: I0216 21:15:59.993253 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 16 21:16:00 crc kubenswrapper[4811]: I0216 21:16:00.082780 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e9b4d89-6e8c-48b7-8fb7-d21aea07d506-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8e9b4d89-6e8c-48b7-8fb7-d21aea07d506\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:16:00 crc kubenswrapper[4811]: I0216 21:16:00.082993 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e9b4d89-6e8c-48b7-8fb7-d21aea07d506-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8e9b4d89-6e8c-48b7-8fb7-d21aea07d506\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:16:00 crc kubenswrapper[4811]: I0216 21:16:00.083395 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e9b4d89-6e8c-48b7-8fb7-d21aea07d506-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8e9b4d89-6e8c-48b7-8fb7-d21aea07d506\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:16:00 crc 
kubenswrapper[4811]: I0216 21:16:00.083576 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s828\" (UniqueName: \"kubernetes.io/projected/8e9b4d89-6e8c-48b7-8fb7-d21aea07d506-kube-api-access-6s828\") pod \"nova-cell1-novncproxy-0\" (UID: \"8e9b4d89-6e8c-48b7-8fb7-d21aea07d506\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:16:00 crc kubenswrapper[4811]: I0216 21:16:00.083643 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e9b4d89-6e8c-48b7-8fb7-d21aea07d506-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8e9b4d89-6e8c-48b7-8fb7-d21aea07d506\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:16:00 crc kubenswrapper[4811]: I0216 21:16:00.186035 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e9b4d89-6e8c-48b7-8fb7-d21aea07d506-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8e9b4d89-6e8c-48b7-8fb7-d21aea07d506\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:16:00 crc kubenswrapper[4811]: I0216 21:16:00.186237 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e9b4d89-6e8c-48b7-8fb7-d21aea07d506-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8e9b4d89-6e8c-48b7-8fb7-d21aea07d506\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:16:00 crc kubenswrapper[4811]: I0216 21:16:00.186457 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e9b4d89-6e8c-48b7-8fb7-d21aea07d506-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8e9b4d89-6e8c-48b7-8fb7-d21aea07d506\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:16:00 crc kubenswrapper[4811]: I0216 
21:16:00.186574 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s828\" (UniqueName: \"kubernetes.io/projected/8e9b4d89-6e8c-48b7-8fb7-d21aea07d506-kube-api-access-6s828\") pod \"nova-cell1-novncproxy-0\" (UID: \"8e9b4d89-6e8c-48b7-8fb7-d21aea07d506\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:16:00 crc kubenswrapper[4811]: I0216 21:16:00.186790 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e9b4d89-6e8c-48b7-8fb7-d21aea07d506-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8e9b4d89-6e8c-48b7-8fb7-d21aea07d506\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:16:00 crc kubenswrapper[4811]: I0216 21:16:00.189488 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e9b4d89-6e8c-48b7-8fb7-d21aea07d506-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8e9b4d89-6e8c-48b7-8fb7-d21aea07d506\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:16:00 crc kubenswrapper[4811]: I0216 21:16:00.189838 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e9b4d89-6e8c-48b7-8fb7-d21aea07d506-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8e9b4d89-6e8c-48b7-8fb7-d21aea07d506\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:16:00 crc kubenswrapper[4811]: I0216 21:16:00.192394 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e9b4d89-6e8c-48b7-8fb7-d21aea07d506-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8e9b4d89-6e8c-48b7-8fb7-d21aea07d506\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:16:00 crc kubenswrapper[4811]: I0216 21:16:00.199800 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e9b4d89-6e8c-48b7-8fb7-d21aea07d506-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8e9b4d89-6e8c-48b7-8fb7-d21aea07d506\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:16:00 crc kubenswrapper[4811]: I0216 21:16:00.209676 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s828\" (UniqueName: \"kubernetes.io/projected/8e9b4d89-6e8c-48b7-8fb7-d21aea07d506-kube-api-access-6s828\") pod \"nova-cell1-novncproxy-0\" (UID: \"8e9b4d89-6e8c-48b7-8fb7-d21aea07d506\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:16:00 crc kubenswrapper[4811]: I0216 21:16:00.311935 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:16:00 crc kubenswrapper[4811]: I0216 21:16:00.713552 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="551b3f79-c4bc-46e0-8c10-ec86c30ec6d5" path="/var/lib/kubelet/pods/551b3f79-c4bc-46e0-8c10-ec86c30ec6d5/volumes" Feb 16 21:16:00 crc kubenswrapper[4811]: I0216 21:16:00.795820 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 21:16:01 crc kubenswrapper[4811]: I0216 21:16:01.104255 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 21:16:01 crc kubenswrapper[4811]: I0216 21:16:01.587560 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8e9b4d89-6e8c-48b7-8fb7-d21aea07d506","Type":"ContainerStarted","Data":"8e9d26a6dfb5e62a4c105d0cc7b19e0521f907acd4ecfb23b895a836796b092b"} Feb 16 21:16:01 crc kubenswrapper[4811]: I0216 21:16:01.587834 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"8e9b4d89-6e8c-48b7-8fb7-d21aea07d506","Type":"ContainerStarted","Data":"0e2937d65b2558467268b93c0ff450a704701a396870de0e06deb944c1da162c"} Feb 16 21:16:01 crc kubenswrapper[4811]: I0216 21:16:01.591263 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"302bf249-51e9-4271-915a-71bbc20c6d4e","Type":"ContainerStarted","Data":"10081026a8dec01a6d4f5ceb9efa6d4b203be264366f2d5775617f70b286c813"} Feb 16 21:16:01 crc kubenswrapper[4811]: I0216 21:16:01.591431 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 21:16:01 crc kubenswrapper[4811]: I0216 21:16:01.615770 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.615747157 podStartE2EDuration="2.615747157s" podCreationTimestamp="2026-02-16 21:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:16:01.604685043 +0000 UTC m=+1179.533981001" watchObservedRunningTime="2026-02-16 21:16:01.615747157 +0000 UTC m=+1179.545043095" Feb 16 21:16:01 crc kubenswrapper[4811]: I0216 21:16:01.625685 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.614830369 podStartE2EDuration="6.625667112s" podCreationTimestamp="2026-02-16 21:15:55 +0000 UTC" firstStartedPulling="2026-02-16 21:15:56.39414479 +0000 UTC m=+1174.323440728" lastFinishedPulling="2026-02-16 21:16:00.404981533 +0000 UTC m=+1178.334277471" observedRunningTime="2026-02-16 21:16:01.623447767 +0000 UTC m=+1179.552743705" watchObservedRunningTime="2026-02-16 21:16:01.625667112 +0000 UTC m=+1179.554963050" Feb 16 21:16:01 crc kubenswrapper[4811]: I0216 21:16:01.701349 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 21:16:01 crc kubenswrapper[4811]: I0216 
21:16:01.701923 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 21:16:01 crc kubenswrapper[4811]: I0216 21:16:01.703526 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 21:16:01 crc kubenswrapper[4811]: I0216 21:16:01.712402 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 21:16:02 crc kubenswrapper[4811]: I0216 21:16:02.611677 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 21:16:02 crc kubenswrapper[4811]: I0216 21:16:02.629827 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 21:16:02 crc kubenswrapper[4811]: I0216 21:16:02.839597 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-kzzxx"] Feb 16 21:16:02 crc kubenswrapper[4811]: I0216 21:16:02.846762 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-kzzxx" Feb 16 21:16:02 crc kubenswrapper[4811]: I0216 21:16:02.855880 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-kzzxx"] Feb 16 21:16:02 crc kubenswrapper[4811]: I0216 21:16:02.945245 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f623a0b-500d-4215-b574-3f4f8234fd64-config\") pod \"dnsmasq-dns-cd5cbd7b9-kzzxx\" (UID: \"6f623a0b-500d-4215-b574-3f4f8234fd64\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kzzxx" Feb 16 21:16:02 crc kubenswrapper[4811]: I0216 21:16:02.945307 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f623a0b-500d-4215-b574-3f4f8234fd64-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-kzzxx\" (UID: \"6f623a0b-500d-4215-b574-3f4f8234fd64\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kzzxx" Feb 16 21:16:02 crc kubenswrapper[4811]: I0216 21:16:02.945347 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57v28\" (UniqueName: \"kubernetes.io/projected/6f623a0b-500d-4215-b574-3f4f8234fd64-kube-api-access-57v28\") pod \"dnsmasq-dns-cd5cbd7b9-kzzxx\" (UID: \"6f623a0b-500d-4215-b574-3f4f8234fd64\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kzzxx" Feb 16 21:16:02 crc kubenswrapper[4811]: I0216 21:16:02.945388 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f623a0b-500d-4215-b574-3f4f8234fd64-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-kzzxx\" (UID: \"6f623a0b-500d-4215-b574-3f4f8234fd64\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kzzxx" Feb 16 21:16:02 crc kubenswrapper[4811]: I0216 21:16:02.945417 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f623a0b-500d-4215-b574-3f4f8234fd64-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-kzzxx\" (UID: \"6f623a0b-500d-4215-b574-3f4f8234fd64\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kzzxx" Feb 16 21:16:02 crc kubenswrapper[4811]: I0216 21:16:02.945489 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f623a0b-500d-4215-b574-3f4f8234fd64-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-kzzxx\" (UID: \"6f623a0b-500d-4215-b574-3f4f8234fd64\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kzzxx" Feb 16 21:16:03 crc kubenswrapper[4811]: I0216 21:16:03.052409 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f623a0b-500d-4215-b574-3f4f8234fd64-config\") pod \"dnsmasq-dns-cd5cbd7b9-kzzxx\" (UID: \"6f623a0b-500d-4215-b574-3f4f8234fd64\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kzzxx" Feb 16 21:16:03 crc kubenswrapper[4811]: I0216 21:16:03.052802 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f623a0b-500d-4215-b574-3f4f8234fd64-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-kzzxx\" (UID: \"6f623a0b-500d-4215-b574-3f4f8234fd64\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kzzxx" Feb 16 21:16:03 crc kubenswrapper[4811]: I0216 21:16:03.052850 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57v28\" (UniqueName: \"kubernetes.io/projected/6f623a0b-500d-4215-b574-3f4f8234fd64-kube-api-access-57v28\") pod \"dnsmasq-dns-cd5cbd7b9-kzzxx\" (UID: \"6f623a0b-500d-4215-b574-3f4f8234fd64\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kzzxx" Feb 16 21:16:03 crc kubenswrapper[4811]: I0216 21:16:03.052890 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/6f623a0b-500d-4215-b574-3f4f8234fd64-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-kzzxx\" (UID: \"6f623a0b-500d-4215-b574-3f4f8234fd64\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kzzxx" Feb 16 21:16:03 crc kubenswrapper[4811]: I0216 21:16:03.052919 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f623a0b-500d-4215-b574-3f4f8234fd64-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-kzzxx\" (UID: \"6f623a0b-500d-4215-b574-3f4f8234fd64\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kzzxx" Feb 16 21:16:03 crc kubenswrapper[4811]: I0216 21:16:03.053013 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f623a0b-500d-4215-b574-3f4f8234fd64-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-kzzxx\" (UID: \"6f623a0b-500d-4215-b574-3f4f8234fd64\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kzzxx" Feb 16 21:16:03 crc kubenswrapper[4811]: I0216 21:16:03.053678 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f623a0b-500d-4215-b574-3f4f8234fd64-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-kzzxx\" (UID: \"6f623a0b-500d-4215-b574-3f4f8234fd64\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kzzxx" Feb 16 21:16:03 crc kubenswrapper[4811]: I0216 21:16:03.053778 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f623a0b-500d-4215-b574-3f4f8234fd64-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-kzzxx\" (UID: \"6f623a0b-500d-4215-b574-3f4f8234fd64\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kzzxx" Feb 16 21:16:03 crc kubenswrapper[4811]: I0216 21:16:03.054233 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f623a0b-500d-4215-b574-3f4f8234fd64-config\") pod 
\"dnsmasq-dns-cd5cbd7b9-kzzxx\" (UID: \"6f623a0b-500d-4215-b574-3f4f8234fd64\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kzzxx" Feb 16 21:16:03 crc kubenswrapper[4811]: I0216 21:16:03.054598 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f623a0b-500d-4215-b574-3f4f8234fd64-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-kzzxx\" (UID: \"6f623a0b-500d-4215-b574-3f4f8234fd64\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kzzxx" Feb 16 21:16:03 crc kubenswrapper[4811]: I0216 21:16:03.054737 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f623a0b-500d-4215-b574-3f4f8234fd64-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-kzzxx\" (UID: \"6f623a0b-500d-4215-b574-3f4f8234fd64\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kzzxx" Feb 16 21:16:03 crc kubenswrapper[4811]: I0216 21:16:03.089313 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57v28\" (UniqueName: \"kubernetes.io/projected/6f623a0b-500d-4215-b574-3f4f8234fd64-kube-api-access-57v28\") pod \"dnsmasq-dns-cd5cbd7b9-kzzxx\" (UID: \"6f623a0b-500d-4215-b574-3f4f8234fd64\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kzzxx" Feb 16 21:16:03 crc kubenswrapper[4811]: I0216 21:16:03.182504 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-kzzxx" Feb 16 21:16:03 crc kubenswrapper[4811]: I0216 21:16:03.652738 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-kzzxx"] Feb 16 21:16:03 crc kubenswrapper[4811]: W0216 21:16:03.654345 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f623a0b_500d_4215_b574_3f4f8234fd64.slice/crio-41d8c3b8b9e945971472bc2300cf86126aee2a58f7369a3b2f7942aee44f39a9 WatchSource:0}: Error finding container 41d8c3b8b9e945971472bc2300cf86126aee2a58f7369a3b2f7942aee44f39a9: Status 404 returned error can't find the container with id 41d8c3b8b9e945971472bc2300cf86126aee2a58f7369a3b2f7942aee44f39a9 Feb 16 21:16:04 crc kubenswrapper[4811]: I0216 21:16:04.630778 4811 generic.go:334] "Generic (PLEG): container finished" podID="6f623a0b-500d-4215-b574-3f4f8234fd64" containerID="2c1bb0a434a29e021d8ca4f6470bbd2c447546978d30eac24f8895f3cd94d1d4" exitCode=0 Feb 16 21:16:04 crc kubenswrapper[4811]: I0216 21:16:04.630838 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-kzzxx" event={"ID":"6f623a0b-500d-4215-b574-3f4f8234fd64","Type":"ContainerDied","Data":"2c1bb0a434a29e021d8ca4f6470bbd2c447546978d30eac24f8895f3cd94d1d4"} Feb 16 21:16:04 crc kubenswrapper[4811]: I0216 21:16:04.631179 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-kzzxx" event={"ID":"6f623a0b-500d-4215-b574-3f4f8234fd64","Type":"ContainerStarted","Data":"41d8c3b8b9e945971472bc2300cf86126aee2a58f7369a3b2f7942aee44f39a9"} Feb 16 21:16:05 crc kubenswrapper[4811]: I0216 21:16:05.270379 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:16:05 crc kubenswrapper[4811]: I0216 21:16:05.312653 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:16:05 crc 
kubenswrapper[4811]: I0216 21:16:05.641365 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-kzzxx" event={"ID":"6f623a0b-500d-4215-b574-3f4f8234fd64","Type":"ContainerStarted","Data":"ce77113ac09f348ac45a8c9e114436dcc02c51f70f5d913d2c859a4c84f50d2b"} Feb 16 21:16:05 crc kubenswrapper[4811]: I0216 21:16:05.641443 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e2be791e-5e97-4def-86cb-06759aac69b1" containerName="nova-api-log" containerID="cri-o://77ed8986fde87b22cc06a46c91824c751b27220bc8d7192116fa832c937747c7" gracePeriod=30 Feb 16 21:16:05 crc kubenswrapper[4811]: I0216 21:16:05.641508 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-kzzxx" Feb 16 21:16:05 crc kubenswrapper[4811]: I0216 21:16:05.641536 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e2be791e-5e97-4def-86cb-06759aac69b1" containerName="nova-api-api" containerID="cri-o://0cab32ec699e67b1cd6a06ca9668131dceb5c45eff89e8c9a41aab5f22a69e53" gracePeriod=30 Feb 16 21:16:05 crc kubenswrapper[4811]: I0216 21:16:05.668296 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-kzzxx" podStartSLOduration=3.668275441 podStartE2EDuration="3.668275441s" podCreationTimestamp="2026-02-16 21:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:16:05.660733044 +0000 UTC m=+1183.590028972" watchObservedRunningTime="2026-02-16 21:16:05.668275441 +0000 UTC m=+1183.597571379" Feb 16 21:16:06 crc kubenswrapper[4811]: I0216 21:16:06.077566 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:16:06 crc kubenswrapper[4811]: I0216 21:16:06.077858 4811 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/ceilometer-0" podUID="302bf249-51e9-4271-915a-71bbc20c6d4e" containerName="ceilometer-central-agent" containerID="cri-o://c029800e08dc22044489336d3f621fb87384a0d1758d471f75018bf338d702d4" gracePeriod=30 Feb 16 21:16:06 crc kubenswrapper[4811]: I0216 21:16:06.078035 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="302bf249-51e9-4271-915a-71bbc20c6d4e" containerName="proxy-httpd" containerID="cri-o://10081026a8dec01a6d4f5ceb9efa6d4b203be264366f2d5775617f70b286c813" gracePeriod=30 Feb 16 21:16:06 crc kubenswrapper[4811]: I0216 21:16:06.078086 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="302bf249-51e9-4271-915a-71bbc20c6d4e" containerName="ceilometer-notification-agent" containerID="cri-o://514c8d940c0cbf782ab7d006ca294d2dae1e68dae5f769bba5f98710a5dbc4fc" gracePeriod=30 Feb 16 21:16:06 crc kubenswrapper[4811]: I0216 21:16:06.078124 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="302bf249-51e9-4271-915a-71bbc20c6d4e" containerName="sg-core" containerID="cri-o://c327c1490dc76f3d91e2ec2060e76a7d5b8855e740caaf838fc236cab44158e2" gracePeriod=30 Feb 16 21:16:06 crc kubenswrapper[4811]: I0216 21:16:06.656603 4811 generic.go:334] "Generic (PLEG): container finished" podID="302bf249-51e9-4271-915a-71bbc20c6d4e" containerID="10081026a8dec01a6d4f5ceb9efa6d4b203be264366f2d5775617f70b286c813" exitCode=0 Feb 16 21:16:06 crc kubenswrapper[4811]: I0216 21:16:06.656945 4811 generic.go:334] "Generic (PLEG): container finished" podID="302bf249-51e9-4271-915a-71bbc20c6d4e" containerID="c327c1490dc76f3d91e2ec2060e76a7d5b8855e740caaf838fc236cab44158e2" exitCode=2 Feb 16 21:16:06 crc kubenswrapper[4811]: I0216 21:16:06.656956 4811 generic.go:334] "Generic (PLEG): container finished" podID="302bf249-51e9-4271-915a-71bbc20c6d4e" 
containerID="c029800e08dc22044489336d3f621fb87384a0d1758d471f75018bf338d702d4" exitCode=0 Feb 16 21:16:06 crc kubenswrapper[4811]: I0216 21:16:06.656671 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"302bf249-51e9-4271-915a-71bbc20c6d4e","Type":"ContainerDied","Data":"10081026a8dec01a6d4f5ceb9efa6d4b203be264366f2d5775617f70b286c813"} Feb 16 21:16:06 crc kubenswrapper[4811]: I0216 21:16:06.657049 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"302bf249-51e9-4271-915a-71bbc20c6d4e","Type":"ContainerDied","Data":"c327c1490dc76f3d91e2ec2060e76a7d5b8855e740caaf838fc236cab44158e2"} Feb 16 21:16:06 crc kubenswrapper[4811]: I0216 21:16:06.657084 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"302bf249-51e9-4271-915a-71bbc20c6d4e","Type":"ContainerDied","Data":"c029800e08dc22044489336d3f621fb87384a0d1758d471f75018bf338d702d4"} Feb 16 21:16:06 crc kubenswrapper[4811]: I0216 21:16:06.659518 4811 generic.go:334] "Generic (PLEG): container finished" podID="e2be791e-5e97-4def-86cb-06759aac69b1" containerID="77ed8986fde87b22cc06a46c91824c751b27220bc8d7192116fa832c937747c7" exitCode=143 Feb 16 21:16:06 crc kubenswrapper[4811]: I0216 21:16:06.659593 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e2be791e-5e97-4def-86cb-06759aac69b1","Type":"ContainerDied","Data":"77ed8986fde87b22cc06a46c91824c751b27220bc8d7192116fa832c937747c7"} Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.181065 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.238015 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/302bf249-51e9-4271-915a-71bbc20c6d4e-log-httpd\") pod \"302bf249-51e9-4271-915a-71bbc20c6d4e\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.238118 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-config-data\") pod \"302bf249-51e9-4271-915a-71bbc20c6d4e\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.238279 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5scxc\" (UniqueName: \"kubernetes.io/projected/302bf249-51e9-4271-915a-71bbc20c6d4e-kube-api-access-5scxc\") pod \"302bf249-51e9-4271-915a-71bbc20c6d4e\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.238385 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-combined-ca-bundle\") pod \"302bf249-51e9-4271-915a-71bbc20c6d4e\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.238431 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-sg-core-conf-yaml\") pod \"302bf249-51e9-4271-915a-71bbc20c6d4e\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.238627 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-scripts\") pod \"302bf249-51e9-4271-915a-71bbc20c6d4e\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.238676 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-ceilometer-tls-certs\") pod \"302bf249-51e9-4271-915a-71bbc20c6d4e\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.238707 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/302bf249-51e9-4271-915a-71bbc20c6d4e-run-httpd\") pod \"302bf249-51e9-4271-915a-71bbc20c6d4e\" (UID: \"302bf249-51e9-4271-915a-71bbc20c6d4e\") " Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.238774 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/302bf249-51e9-4271-915a-71bbc20c6d4e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "302bf249-51e9-4271-915a-71bbc20c6d4e" (UID: "302bf249-51e9-4271-915a-71bbc20c6d4e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.239285 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/302bf249-51e9-4271-915a-71bbc20c6d4e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "302bf249-51e9-4271-915a-71bbc20c6d4e" (UID: "302bf249-51e9-4271-915a-71bbc20c6d4e"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.239586 4811 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/302bf249-51e9-4271-915a-71bbc20c6d4e-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.239604 4811 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/302bf249-51e9-4271-915a-71bbc20c6d4e-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.244322 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/302bf249-51e9-4271-915a-71bbc20c6d4e-kube-api-access-5scxc" (OuterVolumeSpecName: "kube-api-access-5scxc") pod "302bf249-51e9-4271-915a-71bbc20c6d4e" (UID: "302bf249-51e9-4271-915a-71bbc20c6d4e"). InnerVolumeSpecName "kube-api-access-5scxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.263679 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-scripts" (OuterVolumeSpecName: "scripts") pod "302bf249-51e9-4271-915a-71bbc20c6d4e" (UID: "302bf249-51e9-4271-915a-71bbc20c6d4e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.273866 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "302bf249-51e9-4271-915a-71bbc20c6d4e" (UID: "302bf249-51e9-4271-915a-71bbc20c6d4e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.305372 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "302bf249-51e9-4271-915a-71bbc20c6d4e" (UID: "302bf249-51e9-4271-915a-71bbc20c6d4e"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.342002 4811 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.342035 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.342049 4811 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.342063 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5scxc\" (UniqueName: \"kubernetes.io/projected/302bf249-51e9-4271-915a-71bbc20c6d4e-kube-api-access-5scxc\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.350858 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-config-data" (OuterVolumeSpecName: "config-data") pod "302bf249-51e9-4271-915a-71bbc20c6d4e" (UID: "302bf249-51e9-4271-915a-71bbc20c6d4e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.355719 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "302bf249-51e9-4271-915a-71bbc20c6d4e" (UID: "302bf249-51e9-4271-915a-71bbc20c6d4e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.443802 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.443838 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302bf249-51e9-4271-915a-71bbc20c6d4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.678092 4811 generic.go:334] "Generic (PLEG): container finished" podID="302bf249-51e9-4271-915a-71bbc20c6d4e" containerID="514c8d940c0cbf782ab7d006ca294d2dae1e68dae5f769bba5f98710a5dbc4fc" exitCode=0 Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.678137 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"302bf249-51e9-4271-915a-71bbc20c6d4e","Type":"ContainerDied","Data":"514c8d940c0cbf782ab7d006ca294d2dae1e68dae5f769bba5f98710a5dbc4fc"} Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.678165 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"302bf249-51e9-4271-915a-71bbc20c6d4e","Type":"ContainerDied","Data":"92deb0921ae0daf1af99f56184834fa9df9d50081d2554fbec6cbd9fedf8b505"} Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.678183 4811 scope.go:117] "RemoveContainer" 
containerID="10081026a8dec01a6d4f5ceb9efa6d4b203be264366f2d5775617f70b286c813" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.678340 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.726108 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.736950 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.747861 4811 scope.go:117] "RemoveContainer" containerID="c327c1490dc76f3d91e2ec2060e76a7d5b8855e740caaf838fc236cab44158e2" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.757153 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:16:07 crc kubenswrapper[4811]: E0216 21:16:07.757612 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="302bf249-51e9-4271-915a-71bbc20c6d4e" containerName="ceilometer-central-agent" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.757629 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="302bf249-51e9-4271-915a-71bbc20c6d4e" containerName="ceilometer-central-agent" Feb 16 21:16:07 crc kubenswrapper[4811]: E0216 21:16:07.757656 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="302bf249-51e9-4271-915a-71bbc20c6d4e" containerName="sg-core" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.757662 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="302bf249-51e9-4271-915a-71bbc20c6d4e" containerName="sg-core" Feb 16 21:16:07 crc kubenswrapper[4811]: E0216 21:16:07.757672 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="302bf249-51e9-4271-915a-71bbc20c6d4e" containerName="ceilometer-notification-agent" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.757678 4811 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="302bf249-51e9-4271-915a-71bbc20c6d4e" containerName="ceilometer-notification-agent" Feb 16 21:16:07 crc kubenswrapper[4811]: E0216 21:16:07.757699 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="302bf249-51e9-4271-915a-71bbc20c6d4e" containerName="proxy-httpd" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.757705 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="302bf249-51e9-4271-915a-71bbc20c6d4e" containerName="proxy-httpd" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.757895 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="302bf249-51e9-4271-915a-71bbc20c6d4e" containerName="ceilometer-central-agent" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.757914 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="302bf249-51e9-4271-915a-71bbc20c6d4e" containerName="ceilometer-notification-agent" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.757930 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="302bf249-51e9-4271-915a-71bbc20c6d4e" containerName="proxy-httpd" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.757942 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="302bf249-51e9-4271-915a-71bbc20c6d4e" containerName="sg-core" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.773369 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.773523 4811 scope.go:117] "RemoveContainer" containerID="514c8d940c0cbf782ab7d006ca294d2dae1e68dae5f769bba5f98710a5dbc4fc" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.774142 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.776159 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.776522 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.780725 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.809754 4811 scope.go:117] "RemoveContainer" containerID="c029800e08dc22044489336d3f621fb87384a0d1758d471f75018bf338d702d4" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.849935 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " pod="openstack/ceilometer-0" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.850052 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-log-httpd\") pod \"ceilometer-0\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " pod="openstack/ceilometer-0" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.850080 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-scripts\") pod \"ceilometer-0\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " pod="openstack/ceilometer-0" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.850120 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-run-httpd\") pod \"ceilometer-0\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " pod="openstack/ceilometer-0" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.850135 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-config-data\") pod \"ceilometer-0\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " pod="openstack/ceilometer-0" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.850175 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " pod="openstack/ceilometer-0" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.850217 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzf99\" (UniqueName: \"kubernetes.io/projected/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-kube-api-access-pzf99\") pod \"ceilometer-0\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " pod="openstack/ceilometer-0" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.850276 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " pod="openstack/ceilometer-0" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.853218 4811 scope.go:117] "RemoveContainer" containerID="10081026a8dec01a6d4f5ceb9efa6d4b203be264366f2d5775617f70b286c813" Feb 16 21:16:07 crc kubenswrapper[4811]: E0216 21:16:07.853619 4811 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10081026a8dec01a6d4f5ceb9efa6d4b203be264366f2d5775617f70b286c813\": container with ID starting with 10081026a8dec01a6d4f5ceb9efa6d4b203be264366f2d5775617f70b286c813 not found: ID does not exist" containerID="10081026a8dec01a6d4f5ceb9efa6d4b203be264366f2d5775617f70b286c813" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.853661 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10081026a8dec01a6d4f5ceb9efa6d4b203be264366f2d5775617f70b286c813"} err="failed to get container status \"10081026a8dec01a6d4f5ceb9efa6d4b203be264366f2d5775617f70b286c813\": rpc error: code = NotFound desc = could not find container \"10081026a8dec01a6d4f5ceb9efa6d4b203be264366f2d5775617f70b286c813\": container with ID starting with 10081026a8dec01a6d4f5ceb9efa6d4b203be264366f2d5775617f70b286c813 not found: ID does not exist" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.853690 4811 scope.go:117] "RemoveContainer" containerID="c327c1490dc76f3d91e2ec2060e76a7d5b8855e740caaf838fc236cab44158e2" Feb 16 21:16:07 crc kubenswrapper[4811]: E0216 21:16:07.853992 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c327c1490dc76f3d91e2ec2060e76a7d5b8855e740caaf838fc236cab44158e2\": container with ID starting with c327c1490dc76f3d91e2ec2060e76a7d5b8855e740caaf838fc236cab44158e2 not found: ID does not exist" containerID="c327c1490dc76f3d91e2ec2060e76a7d5b8855e740caaf838fc236cab44158e2" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.854036 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c327c1490dc76f3d91e2ec2060e76a7d5b8855e740caaf838fc236cab44158e2"} err="failed to get container status \"c327c1490dc76f3d91e2ec2060e76a7d5b8855e740caaf838fc236cab44158e2\": rpc error: code = NotFound desc = could 
not find container \"c327c1490dc76f3d91e2ec2060e76a7d5b8855e740caaf838fc236cab44158e2\": container with ID starting with c327c1490dc76f3d91e2ec2060e76a7d5b8855e740caaf838fc236cab44158e2 not found: ID does not exist" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.854066 4811 scope.go:117] "RemoveContainer" containerID="514c8d940c0cbf782ab7d006ca294d2dae1e68dae5f769bba5f98710a5dbc4fc" Feb 16 21:16:07 crc kubenswrapper[4811]: E0216 21:16:07.854544 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"514c8d940c0cbf782ab7d006ca294d2dae1e68dae5f769bba5f98710a5dbc4fc\": container with ID starting with 514c8d940c0cbf782ab7d006ca294d2dae1e68dae5f769bba5f98710a5dbc4fc not found: ID does not exist" containerID="514c8d940c0cbf782ab7d006ca294d2dae1e68dae5f769bba5f98710a5dbc4fc" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.854573 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"514c8d940c0cbf782ab7d006ca294d2dae1e68dae5f769bba5f98710a5dbc4fc"} err="failed to get container status \"514c8d940c0cbf782ab7d006ca294d2dae1e68dae5f769bba5f98710a5dbc4fc\": rpc error: code = NotFound desc = could not find container \"514c8d940c0cbf782ab7d006ca294d2dae1e68dae5f769bba5f98710a5dbc4fc\": container with ID starting with 514c8d940c0cbf782ab7d006ca294d2dae1e68dae5f769bba5f98710a5dbc4fc not found: ID does not exist" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.854595 4811 scope.go:117] "RemoveContainer" containerID="c029800e08dc22044489336d3f621fb87384a0d1758d471f75018bf338d702d4" Feb 16 21:16:07 crc kubenswrapper[4811]: E0216 21:16:07.854828 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c029800e08dc22044489336d3f621fb87384a0d1758d471f75018bf338d702d4\": container with ID starting with c029800e08dc22044489336d3f621fb87384a0d1758d471f75018bf338d702d4 not found: 
ID does not exist" containerID="c029800e08dc22044489336d3f621fb87384a0d1758d471f75018bf338d702d4" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.854852 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c029800e08dc22044489336d3f621fb87384a0d1758d471f75018bf338d702d4"} err="failed to get container status \"c029800e08dc22044489336d3f621fb87384a0d1758d471f75018bf338d702d4\": rpc error: code = NotFound desc = could not find container \"c029800e08dc22044489336d3f621fb87384a0d1758d471f75018bf338d702d4\": container with ID starting with c029800e08dc22044489336d3f621fb87384a0d1758d471f75018bf338d702d4 not found: ID does not exist" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.951871 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-config-data\") pod \"ceilometer-0\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " pod="openstack/ceilometer-0" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.951915 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-run-httpd\") pod \"ceilometer-0\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " pod="openstack/ceilometer-0" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.951952 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " pod="openstack/ceilometer-0" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.951981 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzf99\" (UniqueName: 
\"kubernetes.io/projected/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-kube-api-access-pzf99\") pod \"ceilometer-0\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " pod="openstack/ceilometer-0" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.952039 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " pod="openstack/ceilometer-0" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.952109 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " pod="openstack/ceilometer-0" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.952167 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-log-httpd\") pod \"ceilometer-0\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " pod="openstack/ceilometer-0" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.952208 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-scripts\") pod \"ceilometer-0\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " pod="openstack/ceilometer-0" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.952556 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-run-httpd\") pod \"ceilometer-0\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " pod="openstack/ceilometer-0" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 
21:16:07.952603 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-log-httpd\") pod \"ceilometer-0\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " pod="openstack/ceilometer-0" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.955651 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " pod="openstack/ceilometer-0" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.956123 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-scripts\") pod \"ceilometer-0\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " pod="openstack/ceilometer-0" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.956589 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-config-data\") pod \"ceilometer-0\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " pod="openstack/ceilometer-0" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.956879 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " pod="openstack/ceilometer-0" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.957290 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " 
pod="openstack/ceilometer-0" Feb 16 21:16:07 crc kubenswrapper[4811]: I0216 21:16:07.967646 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzf99\" (UniqueName: \"kubernetes.io/projected/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-kube-api-access-pzf99\") pod \"ceilometer-0\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " pod="openstack/ceilometer-0" Feb 16 21:16:08 crc kubenswrapper[4811]: I0216 21:16:08.093533 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:16:08 crc kubenswrapper[4811]: I0216 21:16:08.107483 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:16:08 crc kubenswrapper[4811]: I0216 21:16:08.569849 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:16:08 crc kubenswrapper[4811]: W0216 21:16:08.570620 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c2a3846_c9b9_44df_a09e_2411fbc0d7c6.slice/crio-bcd2135648de6d66a91a4223870246af974efb0a25b13d218d0fc47e0dc727a7 WatchSource:0}: Error finding container bcd2135648de6d66a91a4223870246af974efb0a25b13d218d0fc47e0dc727a7: Status 404 returned error can't find the container with id bcd2135648de6d66a91a4223870246af974efb0a25b13d218d0fc47e0dc727a7 Feb 16 21:16:08 crc kubenswrapper[4811]: I0216 21:16:08.689603 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6","Type":"ContainerStarted","Data":"bcd2135648de6d66a91a4223870246af974efb0a25b13d218d0fc47e0dc727a7"} Feb 16 21:16:08 crc kubenswrapper[4811]: I0216 21:16:08.716397 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="302bf249-51e9-4271-915a-71bbc20c6d4e" path="/var/lib/kubelet/pods/302bf249-51e9-4271-915a-71bbc20c6d4e/volumes" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 
21:16:09.309325 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.383377 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2be791e-5e97-4def-86cb-06759aac69b1-combined-ca-bundle\") pod \"e2be791e-5e97-4def-86cb-06759aac69b1\" (UID: \"e2be791e-5e97-4def-86cb-06759aac69b1\") " Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.383571 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2be791e-5e97-4def-86cb-06759aac69b1-logs\") pod \"e2be791e-5e97-4def-86cb-06759aac69b1\" (UID: \"e2be791e-5e97-4def-86cb-06759aac69b1\") " Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.383628 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z2xs\" (UniqueName: \"kubernetes.io/projected/e2be791e-5e97-4def-86cb-06759aac69b1-kube-api-access-6z2xs\") pod \"e2be791e-5e97-4def-86cb-06759aac69b1\" (UID: \"e2be791e-5e97-4def-86cb-06759aac69b1\") " Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.383652 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2be791e-5e97-4def-86cb-06759aac69b1-config-data\") pod \"e2be791e-5e97-4def-86cb-06759aac69b1\" (UID: \"e2be791e-5e97-4def-86cb-06759aac69b1\") " Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.384337 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2be791e-5e97-4def-86cb-06759aac69b1-logs" (OuterVolumeSpecName: "logs") pod "e2be791e-5e97-4def-86cb-06759aac69b1" (UID: "e2be791e-5e97-4def-86cb-06759aac69b1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.389427 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2be791e-5e97-4def-86cb-06759aac69b1-kube-api-access-6z2xs" (OuterVolumeSpecName: "kube-api-access-6z2xs") pod "e2be791e-5e97-4def-86cb-06759aac69b1" (UID: "e2be791e-5e97-4def-86cb-06759aac69b1"). InnerVolumeSpecName "kube-api-access-6z2xs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.483789 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2be791e-5e97-4def-86cb-06759aac69b1-config-data" (OuterVolumeSpecName: "config-data") pod "e2be791e-5e97-4def-86cb-06759aac69b1" (UID: "e2be791e-5e97-4def-86cb-06759aac69b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.484664 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2be791e-5e97-4def-86cb-06759aac69b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2be791e-5e97-4def-86cb-06759aac69b1" (UID: "e2be791e-5e97-4def-86cb-06759aac69b1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.486532 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2be791e-5e97-4def-86cb-06759aac69b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.486580 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2be791e-5e97-4def-86cb-06759aac69b1-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.486601 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z2xs\" (UniqueName: \"kubernetes.io/projected/e2be791e-5e97-4def-86cb-06759aac69b1-kube-api-access-6z2xs\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.486621 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2be791e-5e97-4def-86cb-06759aac69b1-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.699860 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6","Type":"ContainerStarted","Data":"8cbfcfe86a7a76fea0fc7d081aa4c7c91ae72597cd786c0d7a453c3a69c15673"} Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.702678 4811 generic.go:334] "Generic (PLEG): container finished" podID="e2be791e-5e97-4def-86cb-06759aac69b1" containerID="0cab32ec699e67b1cd6a06ca9668131dceb5c45eff89e8c9a41aab5f22a69e53" exitCode=0 Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.702816 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e2be791e-5e97-4def-86cb-06759aac69b1","Type":"ContainerDied","Data":"0cab32ec699e67b1cd6a06ca9668131dceb5c45eff89e8c9a41aab5f22a69e53"} Feb 16 21:16:09 crc 
kubenswrapper[4811]: I0216 21:16:09.702893 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e2be791e-5e97-4def-86cb-06759aac69b1","Type":"ContainerDied","Data":"2b2175e7a07e873674985dede4960dd6dce3915c59ea0779ab6bc9523a9ae3d6"} Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.702961 4811 scope.go:117] "RemoveContainer" containerID="0cab32ec699e67b1cd6a06ca9668131dceb5c45eff89e8c9a41aab5f22a69e53" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.703135 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.734687 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.737024 4811 scope.go:117] "RemoveContainer" containerID="77ed8986fde87b22cc06a46c91824c751b27220bc8d7192116fa832c937747c7" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.744865 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.767906 4811 scope.go:117] "RemoveContainer" containerID="0cab32ec699e67b1cd6a06ca9668131dceb5c45eff89e8c9a41aab5f22a69e53" Feb 16 21:16:09 crc kubenswrapper[4811]: E0216 21:16:09.768495 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cab32ec699e67b1cd6a06ca9668131dceb5c45eff89e8c9a41aab5f22a69e53\": container with ID starting with 0cab32ec699e67b1cd6a06ca9668131dceb5c45eff89e8c9a41aab5f22a69e53 not found: ID does not exist" containerID="0cab32ec699e67b1cd6a06ca9668131dceb5c45eff89e8c9a41aab5f22a69e53" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.768540 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cab32ec699e67b1cd6a06ca9668131dceb5c45eff89e8c9a41aab5f22a69e53"} err="failed to get container 
status \"0cab32ec699e67b1cd6a06ca9668131dceb5c45eff89e8c9a41aab5f22a69e53\": rpc error: code = NotFound desc = could not find container \"0cab32ec699e67b1cd6a06ca9668131dceb5c45eff89e8c9a41aab5f22a69e53\": container with ID starting with 0cab32ec699e67b1cd6a06ca9668131dceb5c45eff89e8c9a41aab5f22a69e53 not found: ID does not exist" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.768567 4811 scope.go:117] "RemoveContainer" containerID="77ed8986fde87b22cc06a46c91824c751b27220bc8d7192116fa832c937747c7" Feb 16 21:16:09 crc kubenswrapper[4811]: E0216 21:16:09.768935 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77ed8986fde87b22cc06a46c91824c751b27220bc8d7192116fa832c937747c7\": container with ID starting with 77ed8986fde87b22cc06a46c91824c751b27220bc8d7192116fa832c937747c7 not found: ID does not exist" containerID="77ed8986fde87b22cc06a46c91824c751b27220bc8d7192116fa832c937747c7" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.768994 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77ed8986fde87b22cc06a46c91824c751b27220bc8d7192116fa832c937747c7"} err="failed to get container status \"77ed8986fde87b22cc06a46c91824c751b27220bc8d7192116fa832c937747c7\": rpc error: code = NotFound desc = could not find container \"77ed8986fde87b22cc06a46c91824c751b27220bc8d7192116fa832c937747c7\": container with ID starting with 77ed8986fde87b22cc06a46c91824c751b27220bc8d7192116fa832c937747c7 not found: ID does not exist" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.770232 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 21:16:09 crc kubenswrapper[4811]: E0216 21:16:09.770723 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2be791e-5e97-4def-86cb-06759aac69b1" containerName="nova-api-log" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.770745 4811 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e2be791e-5e97-4def-86cb-06759aac69b1" containerName="nova-api-log" Feb 16 21:16:09 crc kubenswrapper[4811]: E0216 21:16:09.770793 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2be791e-5e97-4def-86cb-06759aac69b1" containerName="nova-api-api" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.770803 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2be791e-5e97-4def-86cb-06759aac69b1" containerName="nova-api-api" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.771355 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2be791e-5e97-4def-86cb-06759aac69b1" containerName="nova-api-log" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.771421 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2be791e-5e97-4def-86cb-06759aac69b1" containerName="nova-api-api" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.773043 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.775517 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.775594 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.775716 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.780552 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.893668 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm6bk\" (UniqueName: \"kubernetes.io/projected/d0a02f2a-b837-4e02-b40e-c48149ff8313-kube-api-access-fm6bk\") pod \"nova-api-0\" (UID: \"d0a02f2a-b837-4e02-b40e-c48149ff8313\") " pod="openstack/nova-api-0" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.893717 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0a02f2a-b837-4e02-b40e-c48149ff8313-public-tls-certs\") pod \"nova-api-0\" (UID: \"d0a02f2a-b837-4e02-b40e-c48149ff8313\") " pod="openstack/nova-api-0" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.893843 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a02f2a-b837-4e02-b40e-c48149ff8313-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d0a02f2a-b837-4e02-b40e-c48149ff8313\") " pod="openstack/nova-api-0" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.894034 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0a02f2a-b837-4e02-b40e-c48149ff8313-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d0a02f2a-b837-4e02-b40e-c48149ff8313\") " pod="openstack/nova-api-0" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.894073 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0a02f2a-b837-4e02-b40e-c48149ff8313-logs\") pod \"nova-api-0\" (UID: \"d0a02f2a-b837-4e02-b40e-c48149ff8313\") " pod="openstack/nova-api-0" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.894099 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0a02f2a-b837-4e02-b40e-c48149ff8313-config-data\") pod \"nova-api-0\" (UID: \"d0a02f2a-b837-4e02-b40e-c48149ff8313\") " pod="openstack/nova-api-0" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.996482 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0a02f2a-b837-4e02-b40e-c48149ff8313-public-tls-certs\") pod \"nova-api-0\" (UID: \"d0a02f2a-b837-4e02-b40e-c48149ff8313\") " pod="openstack/nova-api-0" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.996617 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a02f2a-b837-4e02-b40e-c48149ff8313-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d0a02f2a-b837-4e02-b40e-c48149ff8313\") " pod="openstack/nova-api-0" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.996777 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0a02f2a-b837-4e02-b40e-c48149ff8313-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d0a02f2a-b837-4e02-b40e-c48149ff8313\") " pod="openstack/nova-api-0" 
Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.996825 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0a02f2a-b837-4e02-b40e-c48149ff8313-logs\") pod \"nova-api-0\" (UID: \"d0a02f2a-b837-4e02-b40e-c48149ff8313\") " pod="openstack/nova-api-0" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.996860 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0a02f2a-b837-4e02-b40e-c48149ff8313-config-data\") pod \"nova-api-0\" (UID: \"d0a02f2a-b837-4e02-b40e-c48149ff8313\") " pod="openstack/nova-api-0" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.997089 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm6bk\" (UniqueName: \"kubernetes.io/projected/d0a02f2a-b837-4e02-b40e-c48149ff8313-kube-api-access-fm6bk\") pod \"nova-api-0\" (UID: \"d0a02f2a-b837-4e02-b40e-c48149ff8313\") " pod="openstack/nova-api-0" Feb 16 21:16:09 crc kubenswrapper[4811]: I0216 21:16:09.997567 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0a02f2a-b837-4e02-b40e-c48149ff8313-logs\") pod \"nova-api-0\" (UID: \"d0a02f2a-b837-4e02-b40e-c48149ff8313\") " pod="openstack/nova-api-0" Feb 16 21:16:10 crc kubenswrapper[4811]: I0216 21:16:10.000329 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0a02f2a-b837-4e02-b40e-c48149ff8313-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d0a02f2a-b837-4e02-b40e-c48149ff8313\") " pod="openstack/nova-api-0" Feb 16 21:16:10 crc kubenswrapper[4811]: I0216 21:16:10.001743 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0a02f2a-b837-4e02-b40e-c48149ff8313-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"d0a02f2a-b837-4e02-b40e-c48149ff8313\") " pod="openstack/nova-api-0" Feb 16 21:16:10 crc kubenswrapper[4811]: I0216 21:16:10.001895 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0a02f2a-b837-4e02-b40e-c48149ff8313-config-data\") pod \"nova-api-0\" (UID: \"d0a02f2a-b837-4e02-b40e-c48149ff8313\") " pod="openstack/nova-api-0" Feb 16 21:16:10 crc kubenswrapper[4811]: I0216 21:16:10.002791 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a02f2a-b837-4e02-b40e-c48149ff8313-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d0a02f2a-b837-4e02-b40e-c48149ff8313\") " pod="openstack/nova-api-0" Feb 16 21:16:10 crc kubenswrapper[4811]: I0216 21:16:10.014361 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm6bk\" (UniqueName: \"kubernetes.io/projected/d0a02f2a-b837-4e02-b40e-c48149ff8313-kube-api-access-fm6bk\") pod \"nova-api-0\" (UID: \"d0a02f2a-b837-4e02-b40e-c48149ff8313\") " pod="openstack/nova-api-0" Feb 16 21:16:10 crc kubenswrapper[4811]: I0216 21:16:10.093020 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:16:10 crc kubenswrapper[4811]: I0216 21:16:10.312668 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:16:10 crc kubenswrapper[4811]: I0216 21:16:10.332334 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:16:10 crc kubenswrapper[4811]: I0216 21:16:10.584506 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:16:10 crc kubenswrapper[4811]: I0216 21:16:10.717902 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2be791e-5e97-4def-86cb-06759aac69b1" path="/var/lib/kubelet/pods/e2be791e-5e97-4def-86cb-06759aac69b1/volumes" Feb 16 21:16:10 crc kubenswrapper[4811]: I0216 21:16:10.718780 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0a02f2a-b837-4e02-b40e-c48149ff8313","Type":"ContainerStarted","Data":"8d01e7e5735a133f34f5f11159ce5f870d184e17e74b8c45e1974b7772bc3a82"} Feb 16 21:16:10 crc kubenswrapper[4811]: I0216 21:16:10.724350 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6","Type":"ContainerStarted","Data":"f8f79c8add0ac44301bd9d7ed5702961f563d5bd455f891d194856e7296a50b2"} Feb 16 21:16:10 crc kubenswrapper[4811]: I0216 21:16:10.742705 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:16:10 crc kubenswrapper[4811]: I0216 21:16:10.923278 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-5kbjt"] Feb 16 21:16:10 crc kubenswrapper[4811]: I0216 21:16:10.924965 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5kbjt" Feb 16 21:16:10 crc kubenswrapper[4811]: I0216 21:16:10.926914 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 16 21:16:10 crc kubenswrapper[4811]: I0216 21:16:10.927332 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 16 21:16:10 crc kubenswrapper[4811]: I0216 21:16:10.944931 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5kbjt"] Feb 16 21:16:11 crc kubenswrapper[4811]: I0216 21:16:11.021859 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea3abe05-aab5-400d-b325-d94a0916d6a9-config-data\") pod \"nova-cell1-cell-mapping-5kbjt\" (UID: \"ea3abe05-aab5-400d-b325-d94a0916d6a9\") " pod="openstack/nova-cell1-cell-mapping-5kbjt" Feb 16 21:16:11 crc kubenswrapper[4811]: I0216 21:16:11.021915 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea3abe05-aab5-400d-b325-d94a0916d6a9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5kbjt\" (UID: \"ea3abe05-aab5-400d-b325-d94a0916d6a9\") " pod="openstack/nova-cell1-cell-mapping-5kbjt" Feb 16 21:16:11 crc kubenswrapper[4811]: I0216 21:16:11.021964 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea3abe05-aab5-400d-b325-d94a0916d6a9-scripts\") pod \"nova-cell1-cell-mapping-5kbjt\" (UID: \"ea3abe05-aab5-400d-b325-d94a0916d6a9\") " pod="openstack/nova-cell1-cell-mapping-5kbjt" Feb 16 21:16:11 crc kubenswrapper[4811]: I0216 21:16:11.022032 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvlqt\" (UniqueName: 
\"kubernetes.io/projected/ea3abe05-aab5-400d-b325-d94a0916d6a9-kube-api-access-zvlqt\") pod \"nova-cell1-cell-mapping-5kbjt\" (UID: \"ea3abe05-aab5-400d-b325-d94a0916d6a9\") " pod="openstack/nova-cell1-cell-mapping-5kbjt" Feb 16 21:16:11 crc kubenswrapper[4811]: I0216 21:16:11.123700 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvlqt\" (UniqueName: \"kubernetes.io/projected/ea3abe05-aab5-400d-b325-d94a0916d6a9-kube-api-access-zvlqt\") pod \"nova-cell1-cell-mapping-5kbjt\" (UID: \"ea3abe05-aab5-400d-b325-d94a0916d6a9\") " pod="openstack/nova-cell1-cell-mapping-5kbjt" Feb 16 21:16:11 crc kubenswrapper[4811]: I0216 21:16:11.124180 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea3abe05-aab5-400d-b325-d94a0916d6a9-config-data\") pod \"nova-cell1-cell-mapping-5kbjt\" (UID: \"ea3abe05-aab5-400d-b325-d94a0916d6a9\") " pod="openstack/nova-cell1-cell-mapping-5kbjt" Feb 16 21:16:11 crc kubenswrapper[4811]: I0216 21:16:11.124271 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea3abe05-aab5-400d-b325-d94a0916d6a9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5kbjt\" (UID: \"ea3abe05-aab5-400d-b325-d94a0916d6a9\") " pod="openstack/nova-cell1-cell-mapping-5kbjt" Feb 16 21:16:11 crc kubenswrapper[4811]: I0216 21:16:11.124345 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea3abe05-aab5-400d-b325-d94a0916d6a9-scripts\") pod \"nova-cell1-cell-mapping-5kbjt\" (UID: \"ea3abe05-aab5-400d-b325-d94a0916d6a9\") " pod="openstack/nova-cell1-cell-mapping-5kbjt" Feb 16 21:16:11 crc kubenswrapper[4811]: I0216 21:16:11.127675 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ea3abe05-aab5-400d-b325-d94a0916d6a9-scripts\") pod \"nova-cell1-cell-mapping-5kbjt\" (UID: \"ea3abe05-aab5-400d-b325-d94a0916d6a9\") " pod="openstack/nova-cell1-cell-mapping-5kbjt" Feb 16 21:16:11 crc kubenswrapper[4811]: I0216 21:16:11.129711 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea3abe05-aab5-400d-b325-d94a0916d6a9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5kbjt\" (UID: \"ea3abe05-aab5-400d-b325-d94a0916d6a9\") " pod="openstack/nova-cell1-cell-mapping-5kbjt" Feb 16 21:16:11 crc kubenswrapper[4811]: I0216 21:16:11.132772 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea3abe05-aab5-400d-b325-d94a0916d6a9-config-data\") pod \"nova-cell1-cell-mapping-5kbjt\" (UID: \"ea3abe05-aab5-400d-b325-d94a0916d6a9\") " pod="openstack/nova-cell1-cell-mapping-5kbjt" Feb 16 21:16:11 crc kubenswrapper[4811]: I0216 21:16:11.143674 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvlqt\" (UniqueName: \"kubernetes.io/projected/ea3abe05-aab5-400d-b325-d94a0916d6a9-kube-api-access-zvlqt\") pod \"nova-cell1-cell-mapping-5kbjt\" (UID: \"ea3abe05-aab5-400d-b325-d94a0916d6a9\") " pod="openstack/nova-cell1-cell-mapping-5kbjt" Feb 16 21:16:11 crc kubenswrapper[4811]: I0216 21:16:11.240208 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5kbjt" Feb 16 21:16:11 crc kubenswrapper[4811]: I0216 21:16:11.738968 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6","Type":"ContainerStarted","Data":"28958a329b071943169f3fafc472d1ffb9fddfdeb90b628a7c0618eef19bbc5b"} Feb 16 21:16:11 crc kubenswrapper[4811]: I0216 21:16:11.745006 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0a02f2a-b837-4e02-b40e-c48149ff8313","Type":"ContainerStarted","Data":"d31b5bdcb714a9b8b747f9eff4e42751d5f54874ab99b7c3c8cbbfd074394987"} Feb 16 21:16:11 crc kubenswrapper[4811]: I0216 21:16:11.745073 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0a02f2a-b837-4e02-b40e-c48149ff8313","Type":"ContainerStarted","Data":"e117fddb6618c8adb6066a0a756dd736e6888258efd71976f84f3bb53c08304a"} Feb 16 21:16:11 crc kubenswrapper[4811]: I0216 21:16:11.772121 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5kbjt"] Feb 16 21:16:11 crc kubenswrapper[4811]: I0216 21:16:11.786245 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.786228223 podStartE2EDuration="2.786228223s" podCreationTimestamp="2026-02-16 21:16:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:16:11.784688924 +0000 UTC m=+1189.713984852" watchObservedRunningTime="2026-02-16 21:16:11.786228223 +0000 UTC m=+1189.715524161" Feb 16 21:16:12 crc kubenswrapper[4811]: E0216 21:16:12.716073 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" 
pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:16:12 crc kubenswrapper[4811]: I0216 21:16:12.760874 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5kbjt" event={"ID":"ea3abe05-aab5-400d-b325-d94a0916d6a9","Type":"ContainerStarted","Data":"e955d5a60f1aa612e7d82ab4aa271199708308ddd656ae1ca80a406e41061c7a"} Feb 16 21:16:12 crc kubenswrapper[4811]: I0216 21:16:12.760929 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5kbjt" event={"ID":"ea3abe05-aab5-400d-b325-d94a0916d6a9","Type":"ContainerStarted","Data":"7c0b1b57423bbe6a2b8f68ffe66708d00b7db1639ea15f1909263b483e2b442e"} Feb 16 21:16:12 crc kubenswrapper[4811]: I0216 21:16:12.766254 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" containerName="ceilometer-central-agent" containerID="cri-o://8cbfcfe86a7a76fea0fc7d081aa4c7c91ae72597cd786c0d7a453c3a69c15673" gracePeriod=30 Feb 16 21:16:12 crc kubenswrapper[4811]: I0216 21:16:12.766612 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" containerName="proxy-httpd" containerID="cri-o://873530ced1a7928397e98cddd3136c7a3f4e8bc3ed22465e74493eab7bc14632" gracePeriod=30 Feb 16 21:16:12 crc kubenswrapper[4811]: I0216 21:16:12.766675 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" containerName="sg-core" containerID="cri-o://28958a329b071943169f3fafc472d1ffb9fddfdeb90b628a7c0618eef19bbc5b" gracePeriod=30 Feb 16 21:16:12 crc kubenswrapper[4811]: I0216 21:16:12.766721 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" 
containerName="ceilometer-notification-agent" containerID="cri-o://f8f79c8add0ac44301bd9d7ed5702961f563d5bd455f891d194856e7296a50b2" gracePeriod=30 Feb 16 21:16:12 crc kubenswrapper[4811]: I0216 21:16:12.766886 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6","Type":"ContainerStarted","Data":"873530ced1a7928397e98cddd3136c7a3f4e8bc3ed22465e74493eab7bc14632"} Feb 16 21:16:12 crc kubenswrapper[4811]: I0216 21:16:12.766934 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 21:16:12 crc kubenswrapper[4811]: I0216 21:16:12.808801 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.047665423 podStartE2EDuration="5.808778049s" podCreationTimestamp="2026-02-16 21:16:07 +0000 UTC" firstStartedPulling="2026-02-16 21:16:08.573580585 +0000 UTC m=+1186.502876543" lastFinishedPulling="2026-02-16 21:16:12.334693231 +0000 UTC m=+1190.263989169" observedRunningTime="2026-02-16 21:16:12.792219289 +0000 UTC m=+1190.721515247" watchObservedRunningTime="2026-02-16 21:16:12.808778049 +0000 UTC m=+1190.738073997" Feb 16 21:16:12 crc kubenswrapper[4811]: I0216 21:16:12.815798 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-5kbjt" podStartSLOduration=2.815776712 podStartE2EDuration="2.815776712s" podCreationTimestamp="2026-02-16 21:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:16:12.806299647 +0000 UTC m=+1190.735595605" watchObservedRunningTime="2026-02-16 21:16:12.815776712 +0000 UTC m=+1190.745072670" Feb 16 21:16:13 crc kubenswrapper[4811]: I0216 21:16:13.184374 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-kzzxx" Feb 16 21:16:13 crc 
kubenswrapper[4811]: I0216 21:16:13.270433 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-s4zmp"] Feb 16 21:16:13 crc kubenswrapper[4811]: I0216 21:16:13.270714 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" podUID="c9f7a117-80d9-4da7-a3e9-469976254cb9" containerName="dnsmasq-dns" containerID="cri-o://07a87b22a6879d1a02509d9a533f78da88a76452f2b0d8ec7fd2d71532311a1c" gracePeriod=10 Feb 16 21:16:13 crc kubenswrapper[4811]: I0216 21:16:13.798169 4811 generic.go:334] "Generic (PLEG): container finished" podID="c9f7a117-80d9-4da7-a3e9-469976254cb9" containerID="07a87b22a6879d1a02509d9a533f78da88a76452f2b0d8ec7fd2d71532311a1c" exitCode=0 Feb 16 21:16:13 crc kubenswrapper[4811]: I0216 21:16:13.798496 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" event={"ID":"c9f7a117-80d9-4da7-a3e9-469976254cb9","Type":"ContainerDied","Data":"07a87b22a6879d1a02509d9a533f78da88a76452f2b0d8ec7fd2d71532311a1c"} Feb 16 21:16:13 crc kubenswrapper[4811]: I0216 21:16:13.800695 4811 generic.go:334] "Generic (PLEG): container finished" podID="6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" containerID="28958a329b071943169f3fafc472d1ffb9fddfdeb90b628a7c0618eef19bbc5b" exitCode=2 Feb 16 21:16:13 crc kubenswrapper[4811]: I0216 21:16:13.800718 4811 generic.go:334] "Generic (PLEG): container finished" podID="6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" containerID="f8f79c8add0ac44301bd9d7ed5702961f563d5bd455f891d194856e7296a50b2" exitCode=0 Feb 16 21:16:13 crc kubenswrapper[4811]: I0216 21:16:13.801647 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6","Type":"ContainerDied","Data":"28958a329b071943169f3fafc472d1ffb9fddfdeb90b628a7c0618eef19bbc5b"} Feb 16 21:16:13 crc kubenswrapper[4811]: I0216 21:16:13.801675 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6","Type":"ContainerDied","Data":"f8f79c8add0ac44301bd9d7ed5702961f563d5bd455f891d194856e7296a50b2"} Feb 16 21:16:13 crc kubenswrapper[4811]: I0216 21:16:13.977995 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" Feb 16 21:16:14 crc kubenswrapper[4811]: I0216 21:16:14.100876 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpx7w\" (UniqueName: \"kubernetes.io/projected/c9f7a117-80d9-4da7-a3e9-469976254cb9-kube-api-access-dpx7w\") pod \"c9f7a117-80d9-4da7-a3e9-469976254cb9\" (UID: \"c9f7a117-80d9-4da7-a3e9-469976254cb9\") " Feb 16 21:16:14 crc kubenswrapper[4811]: I0216 21:16:14.101086 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-dns-swift-storage-0\") pod \"c9f7a117-80d9-4da7-a3e9-469976254cb9\" (UID: \"c9f7a117-80d9-4da7-a3e9-469976254cb9\") " Feb 16 21:16:14 crc kubenswrapper[4811]: I0216 21:16:14.101164 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-ovsdbserver-nb\") pod \"c9f7a117-80d9-4da7-a3e9-469976254cb9\" (UID: \"c9f7a117-80d9-4da7-a3e9-469976254cb9\") " Feb 16 21:16:14 crc kubenswrapper[4811]: I0216 21:16:14.101409 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-dns-svc\") pod \"c9f7a117-80d9-4da7-a3e9-469976254cb9\" (UID: \"c9f7a117-80d9-4da7-a3e9-469976254cb9\") " Feb 16 21:16:14 crc kubenswrapper[4811]: I0216 21:16:14.101493 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-config\") pod \"c9f7a117-80d9-4da7-a3e9-469976254cb9\" (UID: \"c9f7a117-80d9-4da7-a3e9-469976254cb9\") " Feb 16 21:16:14 crc kubenswrapper[4811]: I0216 21:16:14.101562 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-ovsdbserver-sb\") pod \"c9f7a117-80d9-4da7-a3e9-469976254cb9\" (UID: \"c9f7a117-80d9-4da7-a3e9-469976254cb9\") " Feb 16 21:16:14 crc kubenswrapper[4811]: I0216 21:16:14.108172 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9f7a117-80d9-4da7-a3e9-469976254cb9-kube-api-access-dpx7w" (OuterVolumeSpecName: "kube-api-access-dpx7w") pod "c9f7a117-80d9-4da7-a3e9-469976254cb9" (UID: "c9f7a117-80d9-4da7-a3e9-469976254cb9"). InnerVolumeSpecName "kube-api-access-dpx7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:16:14 crc kubenswrapper[4811]: I0216 21:16:14.163539 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c9f7a117-80d9-4da7-a3e9-469976254cb9" (UID: "c9f7a117-80d9-4da7-a3e9-469976254cb9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:16:14 crc kubenswrapper[4811]: I0216 21:16:14.175685 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c9f7a117-80d9-4da7-a3e9-469976254cb9" (UID: "c9f7a117-80d9-4da7-a3e9-469976254cb9"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:16:14 crc kubenswrapper[4811]: I0216 21:16:14.175842 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-config" (OuterVolumeSpecName: "config") pod "c9f7a117-80d9-4da7-a3e9-469976254cb9" (UID: "c9f7a117-80d9-4da7-a3e9-469976254cb9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:16:14 crc kubenswrapper[4811]: I0216 21:16:14.176129 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c9f7a117-80d9-4da7-a3e9-469976254cb9" (UID: "c9f7a117-80d9-4da7-a3e9-469976254cb9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:16:14 crc kubenswrapper[4811]: I0216 21:16:14.201780 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c9f7a117-80d9-4da7-a3e9-469976254cb9" (UID: "c9f7a117-80d9-4da7-a3e9-469976254cb9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:16:14 crc kubenswrapper[4811]: I0216 21:16:14.207763 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:14 crc kubenswrapper[4811]: I0216 21:16:14.207884 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:14 crc kubenswrapper[4811]: I0216 21:16:14.207954 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:14 crc kubenswrapper[4811]: I0216 21:16:14.208022 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpx7w\" (UniqueName: \"kubernetes.io/projected/c9f7a117-80d9-4da7-a3e9-469976254cb9-kube-api-access-dpx7w\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:14 crc kubenswrapper[4811]: I0216 21:16:14.208080 4811 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:14 crc kubenswrapper[4811]: I0216 21:16:14.208141 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9f7a117-80d9-4da7-a3e9-469976254cb9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:14 crc kubenswrapper[4811]: I0216 21:16:14.810738 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" event={"ID":"c9f7a117-80d9-4da7-a3e9-469976254cb9","Type":"ContainerDied","Data":"2fdc603e15c5d997cef74281b893cb4c6b518e53e00fc3ee8be8532d79dfe3fd"} Feb 16 21:16:14 crc 
kubenswrapper[4811]: I0216 21:16:14.810785 4811 scope.go:117] "RemoveContainer" containerID="07a87b22a6879d1a02509d9a533f78da88a76452f2b0d8ec7fd2d71532311a1c" Feb 16 21:16:14 crc kubenswrapper[4811]: I0216 21:16:14.810907 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" Feb 16 21:16:14 crc kubenswrapper[4811]: I0216 21:16:14.846381 4811 scope.go:117] "RemoveContainer" containerID="a449145a385dddb17233305c61d4f8f8de92f5515aba9d4cc00578a8ada77ce8" Feb 16 21:16:14 crc kubenswrapper[4811]: I0216 21:16:14.851168 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-s4zmp"] Feb 16 21:16:14 crc kubenswrapper[4811]: I0216 21:16:14.876856 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-s4zmp"] Feb 16 21:16:16 crc kubenswrapper[4811]: I0216 21:16:16.716443 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9f7a117-80d9-4da7-a3e9-469976254cb9" path="/var/lib/kubelet/pods/c9f7a117-80d9-4da7-a3e9-469976254cb9/volumes" Feb 16 21:16:16 crc kubenswrapper[4811]: I0216 21:16:16.838368 4811 generic.go:334] "Generic (PLEG): container finished" podID="6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" containerID="8cbfcfe86a7a76fea0fc7d081aa4c7c91ae72597cd786c0d7a453c3a69c15673" exitCode=0 Feb 16 21:16:16 crc kubenswrapper[4811]: I0216 21:16:16.838421 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6","Type":"ContainerDied","Data":"8cbfcfe86a7a76fea0fc7d081aa4c7c91ae72597cd786c0d7a453c3a69c15673"} Feb 16 21:16:17 crc kubenswrapper[4811]: I0216 21:16:17.851575 4811 generic.go:334] "Generic (PLEG): container finished" podID="ea3abe05-aab5-400d-b325-d94a0916d6a9" containerID="e955d5a60f1aa612e7d82ab4aa271199708308ddd656ae1ca80a406e41061c7a" exitCode=0 Feb 16 21:16:17 crc kubenswrapper[4811]: I0216 21:16:17.851650 4811 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5kbjt" event={"ID":"ea3abe05-aab5-400d-b325-d94a0916d6a9","Type":"ContainerDied","Data":"e955d5a60f1aa612e7d82ab4aa271199708308ddd656ae1ca80a406e41061c7a"} Feb 16 21:16:18 crc kubenswrapper[4811]: I0216 21:16:18.620290 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-bccf8f775-s4zmp" podUID="c9f7a117-80d9-4da7-a3e9-469976254cb9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.207:5353: i/o timeout" Feb 16 21:16:19 crc kubenswrapper[4811]: I0216 21:16:19.320731 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5kbjt" Feb 16 21:16:19 crc kubenswrapper[4811]: I0216 21:16:19.424953 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea3abe05-aab5-400d-b325-d94a0916d6a9-combined-ca-bundle\") pod \"ea3abe05-aab5-400d-b325-d94a0916d6a9\" (UID: \"ea3abe05-aab5-400d-b325-d94a0916d6a9\") " Feb 16 21:16:19 crc kubenswrapper[4811]: I0216 21:16:19.425079 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea3abe05-aab5-400d-b325-d94a0916d6a9-scripts\") pod \"ea3abe05-aab5-400d-b325-d94a0916d6a9\" (UID: \"ea3abe05-aab5-400d-b325-d94a0916d6a9\") " Feb 16 21:16:19 crc kubenswrapper[4811]: I0216 21:16:19.425185 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea3abe05-aab5-400d-b325-d94a0916d6a9-config-data\") pod \"ea3abe05-aab5-400d-b325-d94a0916d6a9\" (UID: \"ea3abe05-aab5-400d-b325-d94a0916d6a9\") " Feb 16 21:16:19 crc kubenswrapper[4811]: I0216 21:16:19.425237 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvlqt\" (UniqueName: 
\"kubernetes.io/projected/ea3abe05-aab5-400d-b325-d94a0916d6a9-kube-api-access-zvlqt\") pod \"ea3abe05-aab5-400d-b325-d94a0916d6a9\" (UID: \"ea3abe05-aab5-400d-b325-d94a0916d6a9\") " Feb 16 21:16:19 crc kubenswrapper[4811]: I0216 21:16:19.431635 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea3abe05-aab5-400d-b325-d94a0916d6a9-kube-api-access-zvlqt" (OuterVolumeSpecName: "kube-api-access-zvlqt") pod "ea3abe05-aab5-400d-b325-d94a0916d6a9" (UID: "ea3abe05-aab5-400d-b325-d94a0916d6a9"). InnerVolumeSpecName "kube-api-access-zvlqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:16:19 crc kubenswrapper[4811]: I0216 21:16:19.437317 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea3abe05-aab5-400d-b325-d94a0916d6a9-scripts" (OuterVolumeSpecName: "scripts") pod "ea3abe05-aab5-400d-b325-d94a0916d6a9" (UID: "ea3abe05-aab5-400d-b325-d94a0916d6a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:19 crc kubenswrapper[4811]: E0216 21:16:19.459439 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea3abe05-aab5-400d-b325-d94a0916d6a9-config-data podName:ea3abe05-aab5-400d-b325-d94a0916d6a9 nodeName:}" failed. No retries permitted until 2026-02-16 21:16:19.959219704 +0000 UTC m=+1197.888515642 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/ea3abe05-aab5-400d-b325-d94a0916d6a9-config-data") pod "ea3abe05-aab5-400d-b325-d94a0916d6a9" (UID: "ea3abe05-aab5-400d-b325-d94a0916d6a9") : error deleting /var/lib/kubelet/pods/ea3abe05-aab5-400d-b325-d94a0916d6a9/volume-subpaths: remove /var/lib/kubelet/pods/ea3abe05-aab5-400d-b325-d94a0916d6a9/volume-subpaths: no such file or directory Feb 16 21:16:19 crc kubenswrapper[4811]: I0216 21:16:19.463146 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea3abe05-aab5-400d-b325-d94a0916d6a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea3abe05-aab5-400d-b325-d94a0916d6a9" (UID: "ea3abe05-aab5-400d-b325-d94a0916d6a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:19 crc kubenswrapper[4811]: I0216 21:16:19.527960 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvlqt\" (UniqueName: \"kubernetes.io/projected/ea3abe05-aab5-400d-b325-d94a0916d6a9-kube-api-access-zvlqt\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:19 crc kubenswrapper[4811]: I0216 21:16:19.527984 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea3abe05-aab5-400d-b325-d94a0916d6a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:19 crc kubenswrapper[4811]: I0216 21:16:19.527995 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea3abe05-aab5-400d-b325-d94a0916d6a9-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:19 crc kubenswrapper[4811]: I0216 21:16:19.873041 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5kbjt" 
event={"ID":"ea3abe05-aab5-400d-b325-d94a0916d6a9","Type":"ContainerDied","Data":"7c0b1b57423bbe6a2b8f68ffe66708d00b7db1639ea15f1909263b483e2b442e"} Feb 16 21:16:19 crc kubenswrapper[4811]: I0216 21:16:19.873488 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c0b1b57423bbe6a2b8f68ffe66708d00b7db1639ea15f1909263b483e2b442e" Feb 16 21:16:19 crc kubenswrapper[4811]: I0216 21:16:19.873460 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5kbjt" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.039119 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea3abe05-aab5-400d-b325-d94a0916d6a9-config-data\") pod \"ea3abe05-aab5-400d-b325-d94a0916d6a9\" (UID: \"ea3abe05-aab5-400d-b325-d94a0916d6a9\") " Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.050390 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea3abe05-aab5-400d-b325-d94a0916d6a9-config-data" (OuterVolumeSpecName: "config-data") pod "ea3abe05-aab5-400d-b325-d94a0916d6a9" (UID: "ea3abe05-aab5-400d-b325-d94a0916d6a9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.074320 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.074674 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d0a02f2a-b837-4e02-b40e-c48149ff8313" containerName="nova-api-log" containerID="cri-o://e117fddb6618c8adb6066a0a756dd736e6888258efd71976f84f3bb53c08304a" gracePeriod=30 Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.075339 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d0a02f2a-b837-4e02-b40e-c48149ff8313" containerName="nova-api-api" containerID="cri-o://d31b5bdcb714a9b8b747f9eff4e42751d5f54874ab99b7c3c8cbbfd074394987" gracePeriod=30 Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.104094 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.104372 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9fa14015-5aeb-49dd-85d6-772ab019e88f" containerName="nova-scheduler-scheduler" containerID="cri-o://1e21e7ce3d1b5e71eec007892ec95aaf4f16328755f04fca19777300fafa0293" gracePeriod=30 Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.129480 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.129879 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d299bddd-235d-4382-8590-1103bf10fbd7" containerName="nova-metadata-log" containerID="cri-o://c42f9e299a6305717fcd2a2bbe9358715eb24c0972aea96f7b14542639bb3ed3" gracePeriod=30 Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.130092 4811 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d299bddd-235d-4382-8590-1103bf10fbd7" containerName="nova-metadata-metadata" containerID="cri-o://e11328c8686a6e7f6ee5170af8013a05d03a5bbd7491b491c7701ad4be77518c" gracePeriod=30 Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.142412 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea3abe05-aab5-400d-b325-d94a0916d6a9-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.706376 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.756841 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0a02f2a-b837-4e02-b40e-c48149ff8313-logs\") pod \"d0a02f2a-b837-4e02-b40e-c48149ff8313\" (UID: \"d0a02f2a-b837-4e02-b40e-c48149ff8313\") " Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.756912 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0a02f2a-b837-4e02-b40e-c48149ff8313-internal-tls-certs\") pod \"d0a02f2a-b837-4e02-b40e-c48149ff8313\" (UID: \"d0a02f2a-b837-4e02-b40e-c48149ff8313\") " Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.757102 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm6bk\" (UniqueName: \"kubernetes.io/projected/d0a02f2a-b837-4e02-b40e-c48149ff8313-kube-api-access-fm6bk\") pod \"d0a02f2a-b837-4e02-b40e-c48149ff8313\" (UID: \"d0a02f2a-b837-4e02-b40e-c48149ff8313\") " Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.757207 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d0a02f2a-b837-4e02-b40e-c48149ff8313-combined-ca-bundle\") pod \"d0a02f2a-b837-4e02-b40e-c48149ff8313\" (UID: \"d0a02f2a-b837-4e02-b40e-c48149ff8313\") " Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.757247 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0a02f2a-b837-4e02-b40e-c48149ff8313-config-data\") pod \"d0a02f2a-b837-4e02-b40e-c48149ff8313\" (UID: \"d0a02f2a-b837-4e02-b40e-c48149ff8313\") " Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.757284 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0a02f2a-b837-4e02-b40e-c48149ff8313-public-tls-certs\") pod \"d0a02f2a-b837-4e02-b40e-c48149ff8313\" (UID: \"d0a02f2a-b837-4e02-b40e-c48149ff8313\") " Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.757679 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0a02f2a-b837-4e02-b40e-c48149ff8313-logs" (OuterVolumeSpecName: "logs") pod "d0a02f2a-b837-4e02-b40e-c48149ff8313" (UID: "d0a02f2a-b837-4e02-b40e-c48149ff8313"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.757870 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0a02f2a-b837-4e02-b40e-c48149ff8313-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.765248 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0a02f2a-b837-4e02-b40e-c48149ff8313-kube-api-access-fm6bk" (OuterVolumeSpecName: "kube-api-access-fm6bk") pod "d0a02f2a-b837-4e02-b40e-c48149ff8313" (UID: "d0a02f2a-b837-4e02-b40e-c48149ff8313"). InnerVolumeSpecName "kube-api-access-fm6bk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.795110 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0a02f2a-b837-4e02-b40e-c48149ff8313-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0a02f2a-b837-4e02-b40e-c48149ff8313" (UID: "d0a02f2a-b837-4e02-b40e-c48149ff8313"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.795719 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0a02f2a-b837-4e02-b40e-c48149ff8313-config-data" (OuterVolumeSpecName: "config-data") pod "d0a02f2a-b837-4e02-b40e-c48149ff8313" (UID: "d0a02f2a-b837-4e02-b40e-c48149ff8313"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.826312 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0a02f2a-b837-4e02-b40e-c48149ff8313-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d0a02f2a-b837-4e02-b40e-c48149ff8313" (UID: "d0a02f2a-b837-4e02-b40e-c48149ff8313"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.844571 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0a02f2a-b837-4e02-b40e-c48149ff8313-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d0a02f2a-b837-4e02-b40e-c48149ff8313" (UID: "d0a02f2a-b837-4e02-b40e-c48149ff8313"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.859999 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm6bk\" (UniqueName: \"kubernetes.io/projected/d0a02f2a-b837-4e02-b40e-c48149ff8313-kube-api-access-fm6bk\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.860038 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a02f2a-b837-4e02-b40e-c48149ff8313-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.860053 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0a02f2a-b837-4e02-b40e-c48149ff8313-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.860067 4811 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0a02f2a-b837-4e02-b40e-c48149ff8313-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.860077 4811 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0a02f2a-b837-4e02-b40e-c48149ff8313-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.891241 4811 generic.go:334] "Generic (PLEG): container finished" podID="d0a02f2a-b837-4e02-b40e-c48149ff8313" containerID="d31b5bdcb714a9b8b747f9eff4e42751d5f54874ab99b7c3c8cbbfd074394987" exitCode=0 Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.891280 4811 generic.go:334] "Generic (PLEG): container finished" podID="d0a02f2a-b837-4e02-b40e-c48149ff8313" containerID="e117fddb6618c8adb6066a0a756dd736e6888258efd71976f84f3bb53c08304a" exitCode=143 Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.891323 4811 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0a02f2a-b837-4e02-b40e-c48149ff8313","Type":"ContainerDied","Data":"d31b5bdcb714a9b8b747f9eff4e42751d5f54874ab99b7c3c8cbbfd074394987"} Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.891368 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0a02f2a-b837-4e02-b40e-c48149ff8313","Type":"ContainerDied","Data":"e117fddb6618c8adb6066a0a756dd736e6888258efd71976f84f3bb53c08304a"} Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.891379 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0a02f2a-b837-4e02-b40e-c48149ff8313","Type":"ContainerDied","Data":"8d01e7e5735a133f34f5f11159ce5f870d184e17e74b8c45e1974b7772bc3a82"} Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.891382 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.891393 4811 scope.go:117] "RemoveContainer" containerID="d31b5bdcb714a9b8b747f9eff4e42751d5f54874ab99b7c3c8cbbfd074394987" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.894696 4811 generic.go:334] "Generic (PLEG): container finished" podID="d299bddd-235d-4382-8590-1103bf10fbd7" containerID="c42f9e299a6305717fcd2a2bbe9358715eb24c0972aea96f7b14542639bb3ed3" exitCode=143 Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.894772 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d299bddd-235d-4382-8590-1103bf10fbd7","Type":"ContainerDied","Data":"c42f9e299a6305717fcd2a2bbe9358715eb24c0972aea96f7b14542639bb3ed3"} Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.941005 4811 scope.go:117] "RemoveContainer" containerID="e117fddb6618c8adb6066a0a756dd736e6888258efd71976f84f3bb53c08304a" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.948591 4811 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.963732 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.981384 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 21:16:20 crc kubenswrapper[4811]: E0216 21:16:20.981934 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0a02f2a-b837-4e02-b40e-c48149ff8313" containerName="nova-api-log" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.981953 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0a02f2a-b837-4e02-b40e-c48149ff8313" containerName="nova-api-log" Feb 16 21:16:20 crc kubenswrapper[4811]: E0216 21:16:20.981987 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9f7a117-80d9-4da7-a3e9-469976254cb9" containerName="init" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.981994 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9f7a117-80d9-4da7-a3e9-469976254cb9" containerName="init" Feb 16 21:16:20 crc kubenswrapper[4811]: E0216 21:16:20.982007 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9f7a117-80d9-4da7-a3e9-469976254cb9" containerName="dnsmasq-dns" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.982013 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9f7a117-80d9-4da7-a3e9-469976254cb9" containerName="dnsmasq-dns" Feb 16 21:16:20 crc kubenswrapper[4811]: E0216 21:16:20.982021 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3abe05-aab5-400d-b325-d94a0916d6a9" containerName="nova-manage" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.982028 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3abe05-aab5-400d-b325-d94a0916d6a9" containerName="nova-manage" Feb 16 21:16:20 crc kubenswrapper[4811]: E0216 21:16:20.982040 4811 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="d0a02f2a-b837-4e02-b40e-c48149ff8313" containerName="nova-api-api" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.982047 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0a02f2a-b837-4e02-b40e-c48149ff8313" containerName="nova-api-api" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.982254 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9f7a117-80d9-4da7-a3e9-469976254cb9" containerName="dnsmasq-dns" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.982277 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3abe05-aab5-400d-b325-d94a0916d6a9" containerName="nova-manage" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.982288 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0a02f2a-b837-4e02-b40e-c48149ff8313" containerName="nova-api-log" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.982306 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0a02f2a-b837-4e02-b40e-c48149ff8313" containerName="nova-api-api" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.983430 4811 scope.go:117] "RemoveContainer" containerID="d31b5bdcb714a9b8b747f9eff4e42751d5f54874ab99b7c3c8cbbfd074394987" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.983517 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:16:20 crc kubenswrapper[4811]: E0216 21:16:20.984107 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d31b5bdcb714a9b8b747f9eff4e42751d5f54874ab99b7c3c8cbbfd074394987\": container with ID starting with d31b5bdcb714a9b8b747f9eff4e42751d5f54874ab99b7c3c8cbbfd074394987 not found: ID does not exist" containerID="d31b5bdcb714a9b8b747f9eff4e42751d5f54874ab99b7c3c8cbbfd074394987" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.984142 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d31b5bdcb714a9b8b747f9eff4e42751d5f54874ab99b7c3c8cbbfd074394987"} err="failed to get container status \"d31b5bdcb714a9b8b747f9eff4e42751d5f54874ab99b7c3c8cbbfd074394987\": rpc error: code = NotFound desc = could not find container \"d31b5bdcb714a9b8b747f9eff4e42751d5f54874ab99b7c3c8cbbfd074394987\": container with ID starting with d31b5bdcb714a9b8b747f9eff4e42751d5f54874ab99b7c3c8cbbfd074394987 not found: ID does not exist" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.984166 4811 scope.go:117] "RemoveContainer" containerID="e117fddb6618c8adb6066a0a756dd736e6888258efd71976f84f3bb53c08304a" Feb 16 21:16:20 crc kubenswrapper[4811]: E0216 21:16:20.984427 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e117fddb6618c8adb6066a0a756dd736e6888258efd71976f84f3bb53c08304a\": container with ID starting with e117fddb6618c8adb6066a0a756dd736e6888258efd71976f84f3bb53c08304a not found: ID does not exist" containerID="e117fddb6618c8adb6066a0a756dd736e6888258efd71976f84f3bb53c08304a" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.984457 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e117fddb6618c8adb6066a0a756dd736e6888258efd71976f84f3bb53c08304a"} err="failed to 
get container status \"e117fddb6618c8adb6066a0a756dd736e6888258efd71976f84f3bb53c08304a\": rpc error: code = NotFound desc = could not find container \"e117fddb6618c8adb6066a0a756dd736e6888258efd71976f84f3bb53c08304a\": container with ID starting with e117fddb6618c8adb6066a0a756dd736e6888258efd71976f84f3bb53c08304a not found: ID does not exist" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.984481 4811 scope.go:117] "RemoveContainer" containerID="d31b5bdcb714a9b8b747f9eff4e42751d5f54874ab99b7c3c8cbbfd074394987" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.984743 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d31b5bdcb714a9b8b747f9eff4e42751d5f54874ab99b7c3c8cbbfd074394987"} err="failed to get container status \"d31b5bdcb714a9b8b747f9eff4e42751d5f54874ab99b7c3c8cbbfd074394987\": rpc error: code = NotFound desc = could not find container \"d31b5bdcb714a9b8b747f9eff4e42751d5f54874ab99b7c3c8cbbfd074394987\": container with ID starting with d31b5bdcb714a9b8b747f9eff4e42751d5f54874ab99b7c3c8cbbfd074394987 not found: ID does not exist" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.984772 4811 scope.go:117] "RemoveContainer" containerID="e117fddb6618c8adb6066a0a756dd736e6888258efd71976f84f3bb53c08304a" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.985277 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e117fddb6618c8adb6066a0a756dd736e6888258efd71976f84f3bb53c08304a"} err="failed to get container status \"e117fddb6618c8adb6066a0a756dd736e6888258efd71976f84f3bb53c08304a\": rpc error: code = NotFound desc = could not find container \"e117fddb6618c8adb6066a0a756dd736e6888258efd71976f84f3bb53c08304a\": container with ID starting with e117fddb6618c8adb6066a0a756dd736e6888258efd71976f84f3bb53c08304a not found: ID does not exist" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.985847 4811 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.986130 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.992736 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 21:16:20 crc kubenswrapper[4811]: I0216 21:16:20.994912 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:16:21 crc kubenswrapper[4811]: I0216 21:16:21.064888 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5641083-7376-4bd9-93fc-d4c78fdf086c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e5641083-7376-4bd9-93fc-d4c78fdf086c\") " pod="openstack/nova-api-0" Feb 16 21:16:21 crc kubenswrapper[4811]: I0216 21:16:21.064970 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5641083-7376-4bd9-93fc-d4c78fdf086c-config-data\") pod \"nova-api-0\" (UID: \"e5641083-7376-4bd9-93fc-d4c78fdf086c\") " pod="openstack/nova-api-0" Feb 16 21:16:21 crc kubenswrapper[4811]: I0216 21:16:21.065079 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6k9g\" (UniqueName: \"kubernetes.io/projected/e5641083-7376-4bd9-93fc-d4c78fdf086c-kube-api-access-f6k9g\") pod \"nova-api-0\" (UID: \"e5641083-7376-4bd9-93fc-d4c78fdf086c\") " pod="openstack/nova-api-0" Feb 16 21:16:21 crc kubenswrapper[4811]: I0216 21:16:21.065136 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5641083-7376-4bd9-93fc-d4c78fdf086c-logs\") pod \"nova-api-0\" (UID: \"e5641083-7376-4bd9-93fc-d4c78fdf086c\") " 
pod="openstack/nova-api-0" Feb 16 21:16:21 crc kubenswrapper[4811]: I0216 21:16:21.065196 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5641083-7376-4bd9-93fc-d4c78fdf086c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e5641083-7376-4bd9-93fc-d4c78fdf086c\") " pod="openstack/nova-api-0" Feb 16 21:16:21 crc kubenswrapper[4811]: I0216 21:16:21.065307 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5641083-7376-4bd9-93fc-d4c78fdf086c-public-tls-certs\") pod \"nova-api-0\" (UID: \"e5641083-7376-4bd9-93fc-d4c78fdf086c\") " pod="openstack/nova-api-0" Feb 16 21:16:21 crc kubenswrapper[4811]: E0216 21:16:21.065876 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1e21e7ce3d1b5e71eec007892ec95aaf4f16328755f04fca19777300fafa0293" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 21:16:21 crc kubenswrapper[4811]: E0216 21:16:21.068123 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1e21e7ce3d1b5e71eec007892ec95aaf4f16328755f04fca19777300fafa0293" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 21:16:21 crc kubenswrapper[4811]: E0216 21:16:21.070112 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1e21e7ce3d1b5e71eec007892ec95aaf4f16328755f04fca19777300fafa0293" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 21:16:21 crc kubenswrapper[4811]: E0216 
21:16:21.070173 4811 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9fa14015-5aeb-49dd-85d6-772ab019e88f" containerName="nova-scheduler-scheduler" Feb 16 21:16:21 crc kubenswrapper[4811]: I0216 21:16:21.167114 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5641083-7376-4bd9-93fc-d4c78fdf086c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e5641083-7376-4bd9-93fc-d4c78fdf086c\") " pod="openstack/nova-api-0" Feb 16 21:16:21 crc kubenswrapper[4811]: I0216 21:16:21.167282 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5641083-7376-4bd9-93fc-d4c78fdf086c-public-tls-certs\") pod \"nova-api-0\" (UID: \"e5641083-7376-4bd9-93fc-d4c78fdf086c\") " pod="openstack/nova-api-0" Feb 16 21:16:21 crc kubenswrapper[4811]: I0216 21:16:21.167333 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5641083-7376-4bd9-93fc-d4c78fdf086c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e5641083-7376-4bd9-93fc-d4c78fdf086c\") " pod="openstack/nova-api-0" Feb 16 21:16:21 crc kubenswrapper[4811]: I0216 21:16:21.167394 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5641083-7376-4bd9-93fc-d4c78fdf086c-config-data\") pod \"nova-api-0\" (UID: \"e5641083-7376-4bd9-93fc-d4c78fdf086c\") " pod="openstack/nova-api-0" Feb 16 21:16:21 crc kubenswrapper[4811]: I0216 21:16:21.167485 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6k9g\" (UniqueName: 
\"kubernetes.io/projected/e5641083-7376-4bd9-93fc-d4c78fdf086c-kube-api-access-f6k9g\") pod \"nova-api-0\" (UID: \"e5641083-7376-4bd9-93fc-d4c78fdf086c\") " pod="openstack/nova-api-0" Feb 16 21:16:21 crc kubenswrapper[4811]: I0216 21:16:21.167554 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5641083-7376-4bd9-93fc-d4c78fdf086c-logs\") pod \"nova-api-0\" (UID: \"e5641083-7376-4bd9-93fc-d4c78fdf086c\") " pod="openstack/nova-api-0" Feb 16 21:16:21 crc kubenswrapper[4811]: I0216 21:16:21.168087 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5641083-7376-4bd9-93fc-d4c78fdf086c-logs\") pod \"nova-api-0\" (UID: \"e5641083-7376-4bd9-93fc-d4c78fdf086c\") " pod="openstack/nova-api-0" Feb 16 21:16:21 crc kubenswrapper[4811]: I0216 21:16:21.170733 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5641083-7376-4bd9-93fc-d4c78fdf086c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e5641083-7376-4bd9-93fc-d4c78fdf086c\") " pod="openstack/nova-api-0" Feb 16 21:16:21 crc kubenswrapper[4811]: I0216 21:16:21.171157 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5641083-7376-4bd9-93fc-d4c78fdf086c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e5641083-7376-4bd9-93fc-d4c78fdf086c\") " pod="openstack/nova-api-0" Feb 16 21:16:21 crc kubenswrapper[4811]: I0216 21:16:21.171471 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5641083-7376-4bd9-93fc-d4c78fdf086c-config-data\") pod \"nova-api-0\" (UID: \"e5641083-7376-4bd9-93fc-d4c78fdf086c\") " pod="openstack/nova-api-0" Feb 16 21:16:21 crc kubenswrapper[4811]: I0216 21:16:21.172894 4811 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5641083-7376-4bd9-93fc-d4c78fdf086c-public-tls-certs\") pod \"nova-api-0\" (UID: \"e5641083-7376-4bd9-93fc-d4c78fdf086c\") " pod="openstack/nova-api-0" Feb 16 21:16:21 crc kubenswrapper[4811]: I0216 21:16:21.188516 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6k9g\" (UniqueName: \"kubernetes.io/projected/e5641083-7376-4bd9-93fc-d4c78fdf086c-kube-api-access-f6k9g\") pod \"nova-api-0\" (UID: \"e5641083-7376-4bd9-93fc-d4c78fdf086c\") " pod="openstack/nova-api-0" Feb 16 21:16:21 crc kubenswrapper[4811]: I0216 21:16:21.306005 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:16:22 crc kubenswrapper[4811]: I0216 21:16:22.035674 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:16:22 crc kubenswrapper[4811]: I0216 21:16:22.726956 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0a02f2a-b837-4e02-b40e-c48149ff8313" path="/var/lib/kubelet/pods/d0a02f2a-b837-4e02-b40e-c48149ff8313/volumes" Feb 16 21:16:22 crc kubenswrapper[4811]: I0216 21:16:22.917737 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e5641083-7376-4bd9-93fc-d4c78fdf086c","Type":"ContainerStarted","Data":"f932d0b121913f44f272ab643eec22746232098893df01c27a38e957c9714bbc"} Feb 16 21:16:22 crc kubenswrapper[4811]: I0216 21:16:22.917784 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e5641083-7376-4bd9-93fc-d4c78fdf086c","Type":"ContainerStarted","Data":"a0d8c8daa91f6b3337b332472232ace81724961fce58f94c8b466299167c7979"} Feb 16 21:16:22 crc kubenswrapper[4811]: I0216 21:16:22.917794 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"e5641083-7376-4bd9-93fc-d4c78fdf086c","Type":"ContainerStarted","Data":"79ab1f6a827c2b12837a2d2c5abca9f84b07ae012bbfed974cc5b325baeb3570"} Feb 16 21:16:22 crc kubenswrapper[4811]: I0216 21:16:22.946626 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.946605168 podStartE2EDuration="2.946605168s" podCreationTimestamp="2026-02-16 21:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:16:22.936750554 +0000 UTC m=+1200.866046502" watchObservedRunningTime="2026-02-16 21:16:22.946605168 +0000 UTC m=+1200.875901116" Feb 16 21:16:23 crc kubenswrapper[4811]: I0216 21:16:23.268655 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d299bddd-235d-4382-8590-1103bf10fbd7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": read tcp 10.217.0.2:50380->10.217.0.212:8775: read: connection reset by peer" Feb 16 21:16:23 crc kubenswrapper[4811]: I0216 21:16:23.268680 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d299bddd-235d-4382-8590-1103bf10fbd7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": read tcp 10.217.0.2:50382->10.217.0.212:8775: read: connection reset by peer" Feb 16 21:16:23 crc kubenswrapper[4811]: I0216 21:16:23.758287 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:16:23 crc kubenswrapper[4811]: I0216 21:16:23.939499 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d299bddd-235d-4382-8590-1103bf10fbd7-nova-metadata-tls-certs\") pod \"d299bddd-235d-4382-8590-1103bf10fbd7\" (UID: \"d299bddd-235d-4382-8590-1103bf10fbd7\") " Feb 16 21:16:23 crc kubenswrapper[4811]: I0216 21:16:23.939824 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d299bddd-235d-4382-8590-1103bf10fbd7-logs\") pod \"d299bddd-235d-4382-8590-1103bf10fbd7\" (UID: \"d299bddd-235d-4382-8590-1103bf10fbd7\") " Feb 16 21:16:23 crc kubenswrapper[4811]: I0216 21:16:23.939869 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d299bddd-235d-4382-8590-1103bf10fbd7-config-data\") pod \"d299bddd-235d-4382-8590-1103bf10fbd7\" (UID: \"d299bddd-235d-4382-8590-1103bf10fbd7\") " Feb 16 21:16:23 crc kubenswrapper[4811]: I0216 21:16:23.939908 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b6rr\" (UniqueName: \"kubernetes.io/projected/d299bddd-235d-4382-8590-1103bf10fbd7-kube-api-access-2b6rr\") pod \"d299bddd-235d-4382-8590-1103bf10fbd7\" (UID: \"d299bddd-235d-4382-8590-1103bf10fbd7\") " Feb 16 21:16:23 crc kubenswrapper[4811]: I0216 21:16:23.939984 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d299bddd-235d-4382-8590-1103bf10fbd7-combined-ca-bundle\") pod \"d299bddd-235d-4382-8590-1103bf10fbd7\" (UID: \"d299bddd-235d-4382-8590-1103bf10fbd7\") " Feb 16 21:16:23 crc kubenswrapper[4811]: I0216 21:16:23.941375 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d299bddd-235d-4382-8590-1103bf10fbd7-logs" (OuterVolumeSpecName: "logs") pod "d299bddd-235d-4382-8590-1103bf10fbd7" (UID: "d299bddd-235d-4382-8590-1103bf10fbd7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:16:23 crc kubenswrapper[4811]: I0216 21:16:23.945477 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d299bddd-235d-4382-8590-1103bf10fbd7-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:23 crc kubenswrapper[4811]: I0216 21:16:23.947881 4811 generic.go:334] "Generic (PLEG): container finished" podID="d299bddd-235d-4382-8590-1103bf10fbd7" containerID="e11328c8686a6e7f6ee5170af8013a05d03a5bbd7491b491c7701ad4be77518c" exitCode=0 Feb 16 21:16:23 crc kubenswrapper[4811]: I0216 21:16:23.948106 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d299bddd-235d-4382-8590-1103bf10fbd7","Type":"ContainerDied","Data":"e11328c8686a6e7f6ee5170af8013a05d03a5bbd7491b491c7701ad4be77518c"} Feb 16 21:16:23 crc kubenswrapper[4811]: I0216 21:16:23.948209 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d299bddd-235d-4382-8590-1103bf10fbd7","Type":"ContainerDied","Data":"17d73598829d8e5668c3af585a0fcb31850822983bcd0309fb3553b36ecf3b8b"} Feb 16 21:16:23 crc kubenswrapper[4811]: I0216 21:16:23.948309 4811 scope.go:117] "RemoveContainer" containerID="e11328c8686a6e7f6ee5170af8013a05d03a5bbd7491b491c7701ad4be77518c" Feb 16 21:16:23 crc kubenswrapper[4811]: I0216 21:16:23.948369 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:16:23 crc kubenswrapper[4811]: I0216 21:16:23.950328 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d299bddd-235d-4382-8590-1103bf10fbd7-kube-api-access-2b6rr" (OuterVolumeSpecName: "kube-api-access-2b6rr") pod "d299bddd-235d-4382-8590-1103bf10fbd7" (UID: "d299bddd-235d-4382-8590-1103bf10fbd7"). InnerVolumeSpecName "kube-api-access-2b6rr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:16:23 crc kubenswrapper[4811]: I0216 21:16:23.983860 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d299bddd-235d-4382-8590-1103bf10fbd7-config-data" (OuterVolumeSpecName: "config-data") pod "d299bddd-235d-4382-8590-1103bf10fbd7" (UID: "d299bddd-235d-4382-8590-1103bf10fbd7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:23 crc kubenswrapper[4811]: I0216 21:16:23.998039 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d299bddd-235d-4382-8590-1103bf10fbd7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d299bddd-235d-4382-8590-1103bf10fbd7" (UID: "d299bddd-235d-4382-8590-1103bf10fbd7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.010855 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d299bddd-235d-4382-8590-1103bf10fbd7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d299bddd-235d-4382-8590-1103bf10fbd7" (UID: "d299bddd-235d-4382-8590-1103bf10fbd7"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.047621 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d299bddd-235d-4382-8590-1103bf10fbd7-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.047660 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b6rr\" (UniqueName: \"kubernetes.io/projected/d299bddd-235d-4382-8590-1103bf10fbd7-kube-api-access-2b6rr\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.047675 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d299bddd-235d-4382-8590-1103bf10fbd7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.047687 4811 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d299bddd-235d-4382-8590-1103bf10fbd7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.077954 4811 scope.go:117] "RemoveContainer" containerID="c42f9e299a6305717fcd2a2bbe9358715eb24c0972aea96f7b14542639bb3ed3" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.096847 4811 scope.go:117] "RemoveContainer" containerID="e11328c8686a6e7f6ee5170af8013a05d03a5bbd7491b491c7701ad4be77518c" Feb 16 21:16:24 crc kubenswrapper[4811]: E0216 21:16:24.097371 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e11328c8686a6e7f6ee5170af8013a05d03a5bbd7491b491c7701ad4be77518c\": container with ID starting with e11328c8686a6e7f6ee5170af8013a05d03a5bbd7491b491c7701ad4be77518c not found: ID does not exist" containerID="e11328c8686a6e7f6ee5170af8013a05d03a5bbd7491b491c7701ad4be77518c" Feb 16 21:16:24 crc 
kubenswrapper[4811]: I0216 21:16:24.097408 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e11328c8686a6e7f6ee5170af8013a05d03a5bbd7491b491c7701ad4be77518c"} err="failed to get container status \"e11328c8686a6e7f6ee5170af8013a05d03a5bbd7491b491c7701ad4be77518c\": rpc error: code = NotFound desc = could not find container \"e11328c8686a6e7f6ee5170af8013a05d03a5bbd7491b491c7701ad4be77518c\": container with ID starting with e11328c8686a6e7f6ee5170af8013a05d03a5bbd7491b491c7701ad4be77518c not found: ID does not exist" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.097433 4811 scope.go:117] "RemoveContainer" containerID="c42f9e299a6305717fcd2a2bbe9358715eb24c0972aea96f7b14542639bb3ed3" Feb 16 21:16:24 crc kubenswrapper[4811]: E0216 21:16:24.097830 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c42f9e299a6305717fcd2a2bbe9358715eb24c0972aea96f7b14542639bb3ed3\": container with ID starting with c42f9e299a6305717fcd2a2bbe9358715eb24c0972aea96f7b14542639bb3ed3 not found: ID does not exist" containerID="c42f9e299a6305717fcd2a2bbe9358715eb24c0972aea96f7b14542639bb3ed3" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.097858 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c42f9e299a6305717fcd2a2bbe9358715eb24c0972aea96f7b14542639bb3ed3"} err="failed to get container status \"c42f9e299a6305717fcd2a2bbe9358715eb24c0972aea96f7b14542639bb3ed3\": rpc error: code = NotFound desc = could not find container \"c42f9e299a6305717fcd2a2bbe9358715eb24c0972aea96f7b14542639bb3ed3\": container with ID starting with c42f9e299a6305717fcd2a2bbe9358715eb24c0972aea96f7b14542639bb3ed3 not found: ID does not exist" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.281412 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:16:24 crc 
kubenswrapper[4811]: I0216 21:16:24.299790 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.310487 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:16:24 crc kubenswrapper[4811]: E0216 21:16:24.311695 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d299bddd-235d-4382-8590-1103bf10fbd7" containerName="nova-metadata-log" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.311724 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d299bddd-235d-4382-8590-1103bf10fbd7" containerName="nova-metadata-log" Feb 16 21:16:24 crc kubenswrapper[4811]: E0216 21:16:24.311770 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d299bddd-235d-4382-8590-1103bf10fbd7" containerName="nova-metadata-metadata" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.311779 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d299bddd-235d-4382-8590-1103bf10fbd7" containerName="nova-metadata-metadata" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.311997 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d299bddd-235d-4382-8590-1103bf10fbd7" containerName="nova-metadata-log" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.312024 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d299bddd-235d-4382-8590-1103bf10fbd7" containerName="nova-metadata-metadata" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.313430 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.317257 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.317930 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.334387 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.458106 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c06d20a-86c8-4916-b315-971dab244fd9-config-data\") pod \"nova-metadata-0\" (UID: \"5c06d20a-86c8-4916-b315-971dab244fd9\") " pod="openstack/nova-metadata-0" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.458167 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c06d20a-86c8-4916-b315-971dab244fd9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5c06d20a-86c8-4916-b315-971dab244fd9\") " pod="openstack/nova-metadata-0" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.458218 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c06d20a-86c8-4916-b315-971dab244fd9-logs\") pod \"nova-metadata-0\" (UID: \"5c06d20a-86c8-4916-b315-971dab244fd9\") " pod="openstack/nova-metadata-0" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.458322 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c06d20a-86c8-4916-b315-971dab244fd9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"5c06d20a-86c8-4916-b315-971dab244fd9\") " pod="openstack/nova-metadata-0" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.458430 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpwxd\" (UniqueName: \"kubernetes.io/projected/5c06d20a-86c8-4916-b315-971dab244fd9-kube-api-access-vpwxd\") pod \"nova-metadata-0\" (UID: \"5c06d20a-86c8-4916-b315-971dab244fd9\") " pod="openstack/nova-metadata-0" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.560412 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpwxd\" (UniqueName: \"kubernetes.io/projected/5c06d20a-86c8-4916-b315-971dab244fd9-kube-api-access-vpwxd\") pod \"nova-metadata-0\" (UID: \"5c06d20a-86c8-4916-b315-971dab244fd9\") " pod="openstack/nova-metadata-0" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.560691 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c06d20a-86c8-4916-b315-971dab244fd9-config-data\") pod \"nova-metadata-0\" (UID: \"5c06d20a-86c8-4916-b315-971dab244fd9\") " pod="openstack/nova-metadata-0" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.560751 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c06d20a-86c8-4916-b315-971dab244fd9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5c06d20a-86c8-4916-b315-971dab244fd9\") " pod="openstack/nova-metadata-0" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.560832 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c06d20a-86c8-4916-b315-971dab244fd9-logs\") pod \"nova-metadata-0\" (UID: \"5c06d20a-86c8-4916-b315-971dab244fd9\") " pod="openstack/nova-metadata-0" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.561463 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c06d20a-86c8-4916-b315-971dab244fd9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5c06d20a-86c8-4916-b315-971dab244fd9\") " pod="openstack/nova-metadata-0" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.561498 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c06d20a-86c8-4916-b315-971dab244fd9-logs\") pod \"nova-metadata-0\" (UID: \"5c06d20a-86c8-4916-b315-971dab244fd9\") " pod="openstack/nova-metadata-0" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.565776 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c06d20a-86c8-4916-b315-971dab244fd9-config-data\") pod \"nova-metadata-0\" (UID: \"5c06d20a-86c8-4916-b315-971dab244fd9\") " pod="openstack/nova-metadata-0" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.575280 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c06d20a-86c8-4916-b315-971dab244fd9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5c06d20a-86c8-4916-b315-971dab244fd9\") " pod="openstack/nova-metadata-0" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.575797 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c06d20a-86c8-4916-b315-971dab244fd9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5c06d20a-86c8-4916-b315-971dab244fd9\") " pod="openstack/nova-metadata-0" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.592099 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpwxd\" (UniqueName: \"kubernetes.io/projected/5c06d20a-86c8-4916-b315-971dab244fd9-kube-api-access-vpwxd\") pod \"nova-metadata-0\" 
(UID: \"5c06d20a-86c8-4916-b315-971dab244fd9\") " pod="openstack/nova-metadata-0" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.632821 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.720986 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d299bddd-235d-4382-8590-1103bf10fbd7" path="/var/lib/kubelet/pods/d299bddd-235d-4382-8590-1103bf10fbd7/volumes" Feb 16 21:16:24 crc kubenswrapper[4811]: E0216 21:16:24.819889 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 21:16:24 crc kubenswrapper[4811]: E0216 21:16:24.820245 4811 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 21:16:24 crc kubenswrapper[4811]: E0216 21:16:24.820400 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s56zx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-x49kk_openstack(46d0afcb-2a14-4e67-89fc-ed848d1637ce): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:16:24 crc kubenswrapper[4811]: E0216 21:16:24.821510 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.995506 4811 generic.go:334] "Generic (PLEG): container finished" podID="9fa14015-5aeb-49dd-85d6-772ab019e88f" containerID="1e21e7ce3d1b5e71eec007892ec95aaf4f16328755f04fca19777300fafa0293" exitCode=0 Feb 16 21:16:24 crc kubenswrapper[4811]: I0216 21:16:24.995590 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9fa14015-5aeb-49dd-85d6-772ab019e88f","Type":"ContainerDied","Data":"1e21e7ce3d1b5e71eec007892ec95aaf4f16328755f04fca19777300fafa0293"} Feb 16 21:16:25 crc kubenswrapper[4811]: I0216 21:16:25.110189 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:16:25 crc kubenswrapper[4811]: I0216 21:16:25.277677 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa14015-5aeb-49dd-85d6-772ab019e88f-config-data\") pod \"9fa14015-5aeb-49dd-85d6-772ab019e88f\" (UID: \"9fa14015-5aeb-49dd-85d6-772ab019e88f\") " Feb 16 21:16:25 crc kubenswrapper[4811]: I0216 21:16:25.278070 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6lx8\" (UniqueName: \"kubernetes.io/projected/9fa14015-5aeb-49dd-85d6-772ab019e88f-kube-api-access-r6lx8\") pod \"9fa14015-5aeb-49dd-85d6-772ab019e88f\" (UID: \"9fa14015-5aeb-49dd-85d6-772ab019e88f\") " Feb 16 21:16:25 crc kubenswrapper[4811]: I0216 21:16:25.278242 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa14015-5aeb-49dd-85d6-772ab019e88f-combined-ca-bundle\") pod \"9fa14015-5aeb-49dd-85d6-772ab019e88f\" (UID: \"9fa14015-5aeb-49dd-85d6-772ab019e88f\") " Feb 16 21:16:25 crc kubenswrapper[4811]: I0216 21:16:25.283164 4811 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa14015-5aeb-49dd-85d6-772ab019e88f-kube-api-access-r6lx8" (OuterVolumeSpecName: "kube-api-access-r6lx8") pod "9fa14015-5aeb-49dd-85d6-772ab019e88f" (UID: "9fa14015-5aeb-49dd-85d6-772ab019e88f"). InnerVolumeSpecName "kube-api-access-r6lx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:16:25 crc kubenswrapper[4811]: I0216 21:16:25.288135 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:16:25 crc kubenswrapper[4811]: I0216 21:16:25.344704 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa14015-5aeb-49dd-85d6-772ab019e88f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9fa14015-5aeb-49dd-85d6-772ab019e88f" (UID: "9fa14015-5aeb-49dd-85d6-772ab019e88f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:25 crc kubenswrapper[4811]: I0216 21:16:25.347477 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa14015-5aeb-49dd-85d6-772ab019e88f-config-data" (OuterVolumeSpecName: "config-data") pod "9fa14015-5aeb-49dd-85d6-772ab019e88f" (UID: "9fa14015-5aeb-49dd-85d6-772ab019e88f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:25 crc kubenswrapper[4811]: I0216 21:16:25.380559 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa14015-5aeb-49dd-85d6-772ab019e88f-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:25 crc kubenswrapper[4811]: I0216 21:16:25.380591 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6lx8\" (UniqueName: \"kubernetes.io/projected/9fa14015-5aeb-49dd-85d6-772ab019e88f-kube-api-access-r6lx8\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:25 crc kubenswrapper[4811]: I0216 21:16:25.380602 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa14015-5aeb-49dd-85d6-772ab019e88f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 21:16:26.023887 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 21:16:26.029909 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9fa14015-5aeb-49dd-85d6-772ab019e88f","Type":"ContainerDied","Data":"7a68bce65c339ae676c99ed415a6f3d60b4dc3066cf8ba36566fd47f37e459eb"} Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 21:16:26.029948 4811 scope.go:117] "RemoveContainer" containerID="1e21e7ce3d1b5e71eec007892ec95aaf4f16328755f04fca19777300fafa0293" Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 21:16:26.031743 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c06d20a-86c8-4916-b315-971dab244fd9","Type":"ContainerStarted","Data":"3d81e2972eb6be3efb74570620808e8500d5a620b0f99e23afd0b72ca3b06c2f"} Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 21:16:26.031766 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"5c06d20a-86c8-4916-b315-971dab244fd9","Type":"ContainerStarted","Data":"6557b93a1e7ce61ddc8344fa2ab86512e6a3c976ea80fc503b8515f5cec91a37"} Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 21:16:26.031775 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c06d20a-86c8-4916-b315-971dab244fd9","Type":"ContainerStarted","Data":"9d38337e3ce938f95d18b283e014f2bd892c30567f0f3397008d234403223fce"} Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 21:16:26.070704 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.070681403 podStartE2EDuration="2.070681403s" podCreationTimestamp="2026-02-16 21:16:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:16:26.051851627 +0000 UTC m=+1203.981147585" watchObservedRunningTime="2026-02-16 21:16:26.070681403 +0000 UTC m=+1203.999977351" Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 21:16:26.098613 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 21:16:26.115519 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 21:16:26.130675 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:16:26 crc kubenswrapper[4811]: E0216 21:16:26.131257 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa14015-5aeb-49dd-85d6-772ab019e88f" containerName="nova-scheduler-scheduler" Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 21:16:26.131277 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa14015-5aeb-49dd-85d6-772ab019e88f" containerName="nova-scheduler-scheduler" Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 21:16:26.131462 4811 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="9fa14015-5aeb-49dd-85d6-772ab019e88f" containerName="nova-scheduler-scheduler" Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 21:16:26.132396 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 21:16:26.135520 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 21:16:26.146928 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 21:16:26.299231 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4fkw\" (UniqueName: \"kubernetes.io/projected/22a3ecca-decd-46bd-ae63-25f0c42fba02-kube-api-access-m4fkw\") pod \"nova-scheduler-0\" (UID: \"22a3ecca-decd-46bd-ae63-25f0c42fba02\") " pod="openstack/nova-scheduler-0" Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 21:16:26.299350 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22a3ecca-decd-46bd-ae63-25f0c42fba02-config-data\") pod \"nova-scheduler-0\" (UID: \"22a3ecca-decd-46bd-ae63-25f0c42fba02\") " pod="openstack/nova-scheduler-0" Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 21:16:26.299406 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22a3ecca-decd-46bd-ae63-25f0c42fba02-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"22a3ecca-decd-46bd-ae63-25f0c42fba02\") " pod="openstack/nova-scheduler-0" Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 21:16:26.401878 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22a3ecca-decd-46bd-ae63-25f0c42fba02-config-data\") 
pod \"nova-scheduler-0\" (UID: \"22a3ecca-decd-46bd-ae63-25f0c42fba02\") " pod="openstack/nova-scheduler-0" Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 21:16:26.401964 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22a3ecca-decd-46bd-ae63-25f0c42fba02-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"22a3ecca-decd-46bd-ae63-25f0c42fba02\") " pod="openstack/nova-scheduler-0" Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 21:16:26.402069 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4fkw\" (UniqueName: \"kubernetes.io/projected/22a3ecca-decd-46bd-ae63-25f0c42fba02-kube-api-access-m4fkw\") pod \"nova-scheduler-0\" (UID: \"22a3ecca-decd-46bd-ae63-25f0c42fba02\") " pod="openstack/nova-scheduler-0" Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 21:16:26.407023 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22a3ecca-decd-46bd-ae63-25f0c42fba02-config-data\") pod \"nova-scheduler-0\" (UID: \"22a3ecca-decd-46bd-ae63-25f0c42fba02\") " pod="openstack/nova-scheduler-0" Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 21:16:26.409008 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22a3ecca-decd-46bd-ae63-25f0c42fba02-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"22a3ecca-decd-46bd-ae63-25f0c42fba02\") " pod="openstack/nova-scheduler-0" Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 21:16:26.424029 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4fkw\" (UniqueName: \"kubernetes.io/projected/22a3ecca-decd-46bd-ae63-25f0c42fba02-kube-api-access-m4fkw\") pod \"nova-scheduler-0\" (UID: \"22a3ecca-decd-46bd-ae63-25f0c42fba02\") " pod="openstack/nova-scheduler-0" Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 
21:16:26.453768 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 21:16:26.738501 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fa14015-5aeb-49dd-85d6-772ab019e88f" path="/var/lib/kubelet/pods/9fa14015-5aeb-49dd-85d6-772ab019e88f/volumes" Feb 16 21:16:26 crc kubenswrapper[4811]: I0216 21:16:26.925838 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:16:27 crc kubenswrapper[4811]: I0216 21:16:27.043993 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"22a3ecca-decd-46bd-ae63-25f0c42fba02","Type":"ContainerStarted","Data":"149d8b32e6d9adeab3053315c71789ed546a95e001613b9f45e4a7543cf7a75e"} Feb 16 21:16:28 crc kubenswrapper[4811]: I0216 21:16:28.057700 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"22a3ecca-decd-46bd-ae63-25f0c42fba02","Type":"ContainerStarted","Data":"d91b6cd7fff3bde974cb4260844dabe67bac0a866b9acb5f18013285121d0724"} Feb 16 21:16:28 crc kubenswrapper[4811]: I0216 21:16:28.078403 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.07838109 podStartE2EDuration="2.07838109s" podCreationTimestamp="2026-02-16 21:16:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:16:28.076007582 +0000 UTC m=+1206.005303530" watchObservedRunningTime="2026-02-16 21:16:28.07838109 +0000 UTC m=+1206.007677048" Feb 16 21:16:29 crc kubenswrapper[4811]: I0216 21:16:29.633573 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 21:16:29 crc kubenswrapper[4811]: I0216 21:16:29.633998 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Feb 16 21:16:31 crc kubenswrapper[4811]: I0216 21:16:31.307288 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 21:16:31 crc kubenswrapper[4811]: I0216 21:16:31.307396 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 21:16:31 crc kubenswrapper[4811]: I0216 21:16:31.454791 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 21:16:32 crc kubenswrapper[4811]: I0216 21:16:32.320489 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e5641083-7376-4bd9-93fc-d4c78fdf086c" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.222:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:16:32 crc kubenswrapper[4811]: I0216 21:16:32.320778 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e5641083-7376-4bd9-93fc-d4c78fdf086c" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.222:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:16:34 crc kubenswrapper[4811]: I0216 21:16:34.634125 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 21:16:34 crc kubenswrapper[4811]: I0216 21:16:34.634266 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 21:16:35 crc kubenswrapper[4811]: I0216 21:16:35.652791 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5c06d20a-86c8-4916-b315-971dab244fd9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:16:35 crc 
kubenswrapper[4811]: I0216 21:16:35.653044 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5c06d20a-86c8-4916-b315-971dab244fd9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:16:36 crc kubenswrapper[4811]: I0216 21:16:36.454540 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 21:16:36 crc kubenswrapper[4811]: I0216 21:16:36.488994 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 21:16:37 crc kubenswrapper[4811]: I0216 21:16:37.230985 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 21:16:38 crc kubenswrapper[4811]: I0216 21:16:38.104628 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 21:16:38 crc kubenswrapper[4811]: E0216 21:16:38.706633 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:16:41 crc kubenswrapper[4811]: I0216 21:16:41.316600 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 21:16:41 crc kubenswrapper[4811]: I0216 21:16:41.317629 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 21:16:41 crc kubenswrapper[4811]: I0216 21:16:41.322678 4811 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 21:16:41 crc kubenswrapper[4811]: I0216 21:16:41.328476 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 21:16:41 crc kubenswrapper[4811]: E0216 21:16:41.662140 4811 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/4012e715b08219781c95151cd22c947045a0d6e6017fe07f1d49dc5062e96c06/diff" to get inode usage: stat /var/lib/containers/storage/overlay/4012e715b08219781c95151cd22c947045a0d6e6017fe07f1d49dc5062e96c06/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_nova-scheduler-0_9fa14015-5aeb-49dd-85d6-772ab019e88f/nova-scheduler-scheduler/0.log" to get inode usage: stat /var/log/pods/openstack_nova-scheduler-0_9fa14015-5aeb-49dd-85d6-772ab019e88f/nova-scheduler-scheduler/0.log: no such file or directory Feb 16 21:16:42 crc kubenswrapper[4811]: I0216 21:16:42.243247 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 21:16:42 crc kubenswrapper[4811]: I0216 21:16:42.255405 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.254289 4811 generic.go:334] "Generic (PLEG): container finished" podID="6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" containerID="873530ced1a7928397e98cddd3136c7a3f4e8bc3ed22465e74493eab7bc14632" exitCode=137 Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.254323 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6","Type":"ContainerDied","Data":"873530ced1a7928397e98cddd3136c7a3f4e8bc3ed22465e74493eab7bc14632"} Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.408714 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.493822 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-run-httpd\") pod \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.494072 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzf99\" (UniqueName: \"kubernetes.io/projected/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-kube-api-access-pzf99\") pod \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.494295 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-config-data\") pod \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.494405 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-log-httpd\") pod \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.494455 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" (UID: "6c2a3846-c9b9-44df-a09e-2411fbc0d7c6"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.494547 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-sg-core-conf-yaml\") pod \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.494654 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-ceilometer-tls-certs\") pod \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.494750 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-scripts\") pod \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.494845 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-combined-ca-bundle\") pod \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\" (UID: \"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6\") " Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.495264 4811 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.500057 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-kube-api-access-pzf99" (OuterVolumeSpecName: 
"kube-api-access-pzf99") pod "6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" (UID: "6c2a3846-c9b9-44df-a09e-2411fbc0d7c6"). InnerVolumeSpecName "kube-api-access-pzf99". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.501240 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" (UID: "6c2a3846-c9b9-44df-a09e-2411fbc0d7c6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.515431 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-scripts" (OuterVolumeSpecName: "scripts") pod "6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" (UID: "6c2a3846-c9b9-44df-a09e-2411fbc0d7c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.560847 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" (UID: "6c2a3846-c9b9-44df-a09e-2411fbc0d7c6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.570611 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" (UID: "6c2a3846-c9b9-44df-a09e-2411fbc0d7c6"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.598584 4811 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.598619 4811 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.598634 4811 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.598647 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.598659 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzf99\" (UniqueName: \"kubernetes.io/projected/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-kube-api-access-pzf99\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.623307 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" (UID: "6c2a3846-c9b9-44df-a09e-2411fbc0d7c6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.647471 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-config-data" (OuterVolumeSpecName: "config-data") pod "6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" (UID: "6c2a3846-c9b9-44df-a09e-2411fbc0d7c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.700758 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:43 crc kubenswrapper[4811]: I0216 21:16:43.700793 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.271933 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c2a3846-c9b9-44df-a09e-2411fbc0d7c6","Type":"ContainerDied","Data":"bcd2135648de6d66a91a4223870246af974efb0a25b13d218d0fc47e0dc727a7"} Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.272019 4811 scope.go:117] "RemoveContainer" containerID="873530ced1a7928397e98cddd3136c7a3f4e8bc3ed22465e74493eab7bc14632" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.271956 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.319854 4811 scope.go:117] "RemoveContainer" containerID="28958a329b071943169f3fafc472d1ffb9fddfdeb90b628a7c0618eef19bbc5b" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.339350 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.346585 4811 scope.go:117] "RemoveContainer" containerID="f8f79c8add0ac44301bd9d7ed5702961f563d5bd455f891d194856e7296a50b2" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.356133 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.369843 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:16:44 crc kubenswrapper[4811]: E0216 21:16:44.370659 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" containerName="proxy-httpd" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.370726 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" containerName="proxy-httpd" Feb 16 21:16:44 crc kubenswrapper[4811]: E0216 21:16:44.370775 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" containerName="ceilometer-central-agent" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.370796 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" containerName="ceilometer-central-agent" Feb 16 21:16:44 crc kubenswrapper[4811]: E0216 21:16:44.370833 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" containerName="ceilometer-notification-agent" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.370853 4811 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" containerName="ceilometer-notification-agent" Feb 16 21:16:44 crc kubenswrapper[4811]: E0216 21:16:44.370901 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" containerName="sg-core" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.370922 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" containerName="sg-core" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.371444 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" containerName="proxy-httpd" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.371502 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" containerName="ceilometer-notification-agent" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.371541 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" containerName="ceilometer-central-agent" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.371570 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" containerName="sg-core" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.375849 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.380476 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.396950 4811 scope.go:117] "RemoveContainer" containerID="8cbfcfe86a7a76fea0fc7d081aa4c7c91ae72597cd786c0d7a453c3a69c15673" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.399016 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.399335 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.399361 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.527946 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f889b0d1-bc4c-4eeb-a4bf-789d313c1055-scripts\") pod \"ceilometer-0\" (UID: \"f889b0d1-bc4c-4eeb-a4bf-789d313c1055\") " pod="openstack/ceilometer-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.528371 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f889b0d1-bc4c-4eeb-a4bf-789d313c1055-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f889b0d1-bc4c-4eeb-a4bf-789d313c1055\") " pod="openstack/ceilometer-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.528495 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f889b0d1-bc4c-4eeb-a4bf-789d313c1055-log-httpd\") pod \"ceilometer-0\" (UID: \"f889b0d1-bc4c-4eeb-a4bf-789d313c1055\") " pod="openstack/ceilometer-0" Feb 16 
21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.528607 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f889b0d1-bc4c-4eeb-a4bf-789d313c1055-run-httpd\") pod \"ceilometer-0\" (UID: \"f889b0d1-bc4c-4eeb-a4bf-789d313c1055\") " pod="openstack/ceilometer-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.528712 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db2fp\" (UniqueName: \"kubernetes.io/projected/f889b0d1-bc4c-4eeb-a4bf-789d313c1055-kube-api-access-db2fp\") pod \"ceilometer-0\" (UID: \"f889b0d1-bc4c-4eeb-a4bf-789d313c1055\") " pod="openstack/ceilometer-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.528806 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f889b0d1-bc4c-4eeb-a4bf-789d313c1055-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f889b0d1-bc4c-4eeb-a4bf-789d313c1055\") " pod="openstack/ceilometer-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.528922 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f889b0d1-bc4c-4eeb-a4bf-789d313c1055-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f889b0d1-bc4c-4eeb-a4bf-789d313c1055\") " pod="openstack/ceilometer-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.529091 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f889b0d1-bc4c-4eeb-a4bf-789d313c1055-config-data\") pod \"ceilometer-0\" (UID: \"f889b0d1-bc4c-4eeb-a4bf-789d313c1055\") " pod="openstack/ceilometer-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.631185 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f889b0d1-bc4c-4eeb-a4bf-789d313c1055-config-data\") pod \"ceilometer-0\" (UID: \"f889b0d1-bc4c-4eeb-a4bf-789d313c1055\") " pod="openstack/ceilometer-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.631603 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f889b0d1-bc4c-4eeb-a4bf-789d313c1055-scripts\") pod \"ceilometer-0\" (UID: \"f889b0d1-bc4c-4eeb-a4bf-789d313c1055\") " pod="openstack/ceilometer-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.631748 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f889b0d1-bc4c-4eeb-a4bf-789d313c1055-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f889b0d1-bc4c-4eeb-a4bf-789d313c1055\") " pod="openstack/ceilometer-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.631876 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f889b0d1-bc4c-4eeb-a4bf-789d313c1055-log-httpd\") pod \"ceilometer-0\" (UID: \"f889b0d1-bc4c-4eeb-a4bf-789d313c1055\") " pod="openstack/ceilometer-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.631987 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f889b0d1-bc4c-4eeb-a4bf-789d313c1055-run-httpd\") pod \"ceilometer-0\" (UID: \"f889b0d1-bc4c-4eeb-a4bf-789d313c1055\") " pod="openstack/ceilometer-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.632078 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db2fp\" (UniqueName: \"kubernetes.io/projected/f889b0d1-bc4c-4eeb-a4bf-789d313c1055-kube-api-access-db2fp\") pod \"ceilometer-0\" (UID: \"f889b0d1-bc4c-4eeb-a4bf-789d313c1055\") " 
pod="openstack/ceilometer-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.632181 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f889b0d1-bc4c-4eeb-a4bf-789d313c1055-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f889b0d1-bc4c-4eeb-a4bf-789d313c1055\") " pod="openstack/ceilometer-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.632361 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f889b0d1-bc4c-4eeb-a4bf-789d313c1055-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f889b0d1-bc4c-4eeb-a4bf-789d313c1055\") " pod="openstack/ceilometer-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.632717 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f889b0d1-bc4c-4eeb-a4bf-789d313c1055-run-httpd\") pod \"ceilometer-0\" (UID: \"f889b0d1-bc4c-4eeb-a4bf-789d313c1055\") " pod="openstack/ceilometer-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.632865 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f889b0d1-bc4c-4eeb-a4bf-789d313c1055-log-httpd\") pod \"ceilometer-0\" (UID: \"f889b0d1-bc4c-4eeb-a4bf-789d313c1055\") " pod="openstack/ceilometer-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.636986 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f889b0d1-bc4c-4eeb-a4bf-789d313c1055-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f889b0d1-bc4c-4eeb-a4bf-789d313c1055\") " pod="openstack/ceilometer-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.637056 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f889b0d1-bc4c-4eeb-a4bf-789d313c1055-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f889b0d1-bc4c-4eeb-a4bf-789d313c1055\") " pod="openstack/ceilometer-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.639750 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f889b0d1-bc4c-4eeb-a4bf-789d313c1055-scripts\") pod \"ceilometer-0\" (UID: \"f889b0d1-bc4c-4eeb-a4bf-789d313c1055\") " pod="openstack/ceilometer-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.642260 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f889b0d1-bc4c-4eeb-a4bf-789d313c1055-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f889b0d1-bc4c-4eeb-a4bf-789d313c1055\") " pod="openstack/ceilometer-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.643366 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f889b0d1-bc4c-4eeb-a4bf-789d313c1055-config-data\") pod \"ceilometer-0\" (UID: \"f889b0d1-bc4c-4eeb-a4bf-789d313c1055\") " pod="openstack/ceilometer-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.649581 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.658099 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.658505 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.661022 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db2fp\" (UniqueName: \"kubernetes.io/projected/f889b0d1-bc4c-4eeb-a4bf-789d313c1055-kube-api-access-db2fp\") pod 
\"ceilometer-0\" (UID: \"f889b0d1-bc4c-4eeb-a4bf-789d313c1055\") " pod="openstack/ceilometer-0" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.720183 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c2a3846-c9b9-44df-a09e-2411fbc0d7c6" path="/var/lib/kubelet/pods/6c2a3846-c9b9-44df-a09e-2411fbc0d7c6/volumes" Feb 16 21:16:44 crc kubenswrapper[4811]: I0216 21:16:44.723645 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:16:45 crc kubenswrapper[4811]: I0216 21:16:45.218168 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:16:45 crc kubenswrapper[4811]: I0216 21:16:45.285837 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f889b0d1-bc4c-4eeb-a4bf-789d313c1055","Type":"ContainerStarted","Data":"ffa61635989f159f78ef833115c1945c52da35e26a7eeb1cfdc259d7ce87808b"} Feb 16 21:16:45 crc kubenswrapper[4811]: I0216 21:16:45.294461 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 21:16:46 crc kubenswrapper[4811]: I0216 21:16:46.295617 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f889b0d1-bc4c-4eeb-a4bf-789d313c1055","Type":"ContainerStarted","Data":"0bcf41f75c40f955e2490f8bc2547b16df03fb106ef5d12e2d4c471107841c84"} Feb 16 21:16:47 crc kubenswrapper[4811]: I0216 21:16:47.307826 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f889b0d1-bc4c-4eeb-a4bf-789d313c1055","Type":"ContainerStarted","Data":"002b6ba602630d02be43427ba519225ed5a952ddd8e97013284a22bbf7e9db82"} Feb 16 21:16:48 crc kubenswrapper[4811]: I0216 21:16:48.321076 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f889b0d1-bc4c-4eeb-a4bf-789d313c1055","Type":"ContainerStarted","Data":"e17b817f081ed532301a9c2ed23e79a12261f25332fcfc9c597fc29865a3bf95"} Feb 16 21:16:49 crc kubenswrapper[4811]: I0216 21:16:49.341217 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f889b0d1-bc4c-4eeb-a4bf-789d313c1055","Type":"ContainerStarted","Data":"78cfa2a201256f731f1bab1c01bc6069bb598a963cecb0942cfe7a3d0933696b"} Feb 16 21:16:49 crc kubenswrapper[4811]: I0216 21:16:49.342786 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 21:16:52 crc kubenswrapper[4811]: E0216 21:16:52.716310 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:16:52 crc kubenswrapper[4811]: I0216 21:16:52.739433 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.496907571 podStartE2EDuration="8.739412947s" podCreationTimestamp="2026-02-16 21:16:44 +0000 UTC" firstStartedPulling="2026-02-16 21:16:45.227834459 +0000 UTC m=+1223.157130417" lastFinishedPulling="2026-02-16 21:16:48.470339845 +0000 UTC m=+1226.399635793" observedRunningTime="2026-02-16 21:16:49.371984351 +0000 UTC m=+1227.301280349" watchObservedRunningTime="2026-02-16 21:16:52.739412947 +0000 UTC m=+1230.668708895" Feb 16 21:17:05 crc kubenswrapper[4811]: E0216 21:17:05.704728 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" 
podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:17:14 crc kubenswrapper[4811]: I0216 21:17:14.732267 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 21:17:17 crc kubenswrapper[4811]: E0216 21:17:17.706139 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:17:31 crc kubenswrapper[4811]: E0216 21:17:31.705550 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:17:42 crc kubenswrapper[4811]: E0216 21:17:42.715770 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:17:48 crc kubenswrapper[4811]: I0216 21:17:48.364181 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:17:48 crc kubenswrapper[4811]: I0216 21:17:48.364800 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" 
podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:17:54 crc kubenswrapper[4811]: E0216 21:17:54.706667 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:18:09 crc kubenswrapper[4811]: E0216 21:18:09.705502 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:18:18 crc kubenswrapper[4811]: I0216 21:18:18.363670 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:18:18 crc kubenswrapper[4811]: I0216 21:18:18.364382 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:18:22 crc kubenswrapper[4811]: E0216 21:18:22.723150 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:18:33 crc kubenswrapper[4811]: E0216 21:18:33.707595 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:18:48 crc kubenswrapper[4811]: I0216 21:18:48.363999 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:18:48 crc kubenswrapper[4811]: I0216 21:18:48.364511 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:18:48 crc kubenswrapper[4811]: I0216 21:18:48.364568 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" Feb 16 21:18:48 crc kubenswrapper[4811]: I0216 21:18:48.365430 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"38ec19e15b9324f2ccde21c32410034a04474118800f86b56f7b258842a5727e"} pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:18:48 crc 
kubenswrapper[4811]: I0216 21:18:48.365485 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" containerID="cri-o://38ec19e15b9324f2ccde21c32410034a04474118800f86b56f7b258842a5727e" gracePeriod=600 Feb 16 21:18:48 crc kubenswrapper[4811]: E0216 21:18:48.705522 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:18:48 crc kubenswrapper[4811]: I0216 21:18:48.874762 4811 generic.go:334] "Generic (PLEG): container finished" podID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerID="38ec19e15b9324f2ccde21c32410034a04474118800f86b56f7b258842a5727e" exitCode=0 Feb 16 21:18:48 crc kubenswrapper[4811]: I0216 21:18:48.874804 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerDied","Data":"38ec19e15b9324f2ccde21c32410034a04474118800f86b56f7b258842a5727e"} Feb 16 21:18:48 crc kubenswrapper[4811]: I0216 21:18:48.874829 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerStarted","Data":"89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb"} Feb 16 21:18:48 crc kubenswrapper[4811]: I0216 21:18:48.874843 4811 scope.go:117] "RemoveContainer" containerID="c5a0cef66cb330788b58ea1a5723377ba1dc93aa2016d4d0b1ec1df645e788ff" Feb 16 21:19:03 crc kubenswrapper[4811]: E0216 21:19:03.705568 4811 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:19:15 crc kubenswrapper[4811]: E0216 21:19:15.828805 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 21:19:15 crc kubenswrapper[4811]: E0216 21:19:15.829552 4811 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 21:19:15 crc kubenswrapper[4811]: E0216 21:19:15.829764 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s56zx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-x49kk_openstack(46d0afcb-2a14-4e67-89fc-ed848d1637ce): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:19:15 crc kubenswrapper[4811]: E0216 21:19:15.830998 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:19:30 crc kubenswrapper[4811]: E0216 21:19:30.705851 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:19:42 crc kubenswrapper[4811]: E0216 21:19:42.727724 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:19:51 crc kubenswrapper[4811]: I0216 21:19:51.061370 4811 scope.go:117] "RemoveContainer" containerID="1e012c4ca23120a3cf3ac1134d9b440249bbfd71c2eb5c54c03ebb045a776dd0" Feb 16 21:19:51 crc kubenswrapper[4811]: I0216 21:19:51.105424 4811 scope.go:117] "RemoveContainer" containerID="2738f6783b3629445dab537bceac537d8bccdadccc2e8069fd323a0857e3381f" Feb 16 21:19:51 crc kubenswrapper[4811]: I0216 21:19:51.153436 4811 scope.go:117] "RemoveContainer" containerID="8dd8c402b8048ef6a4f3f27495097c1a76f9e7ad1777f0d4c60d692eae2434fd" Feb 16 21:19:51 crc kubenswrapper[4811]: I0216 21:19:51.228095 4811 scope.go:117] "RemoveContainer" containerID="9a477de0b014de404d9e6cb9a882bf2bda550241dd85bfab31a0882aa33b358e" Feb 16 21:19:55 crc kubenswrapper[4811]: E0216 21:19:55.704689 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" 
pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:20:09 crc kubenswrapper[4811]: E0216 21:20:09.704560 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:20:20 crc kubenswrapper[4811]: E0216 21:20:20.705533 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:20:33 crc kubenswrapper[4811]: E0216 21:20:33.705291 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:20:44 crc kubenswrapper[4811]: E0216 21:20:44.707262 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:20:48 crc kubenswrapper[4811]: I0216 21:20:48.364327 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:20:48 crc kubenswrapper[4811]: I0216 21:20:48.365239 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:20:51 crc kubenswrapper[4811]: I0216 21:20:51.362067 4811 scope.go:117] "RemoveContainer" containerID="ac659e6135f2024beb4cdbe96cf7d40ce7760d91454336670aa36eae385eb2cb" Feb 16 21:20:51 crc kubenswrapper[4811]: I0216 21:20:51.395675 4811 scope.go:117] "RemoveContainer" containerID="22e42a6660413f158e6590b5ff4b4d4e7bb05829329b7abc71383a539c5a63cd" Feb 16 21:20:51 crc kubenswrapper[4811]: I0216 21:20:51.423636 4811 scope.go:117] "RemoveContainer" containerID="1b67498397efda23989c5ad9ff1328c369c7fe3142af38d1d41a9f38ae7aa197" Feb 16 21:20:51 crc kubenswrapper[4811]: I0216 21:20:51.459666 4811 scope.go:117] "RemoveContainer" containerID="6dd7ff72a64e573211cd03956e1e245e6b60ba617515e7e6149ec48d89339f85" Feb 16 21:20:59 crc kubenswrapper[4811]: E0216 21:20:59.705563 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:21:11 crc kubenswrapper[4811]: E0216 21:21:11.705439 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:21:18 crc 
kubenswrapper[4811]: I0216 21:21:18.363807 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:21:18 crc kubenswrapper[4811]: I0216 21:21:18.364293 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:21:22 crc kubenswrapper[4811]: E0216 21:21:22.716792 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:21:37 crc kubenswrapper[4811]: E0216 21:21:37.705457 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:21:48 crc kubenswrapper[4811]: I0216 21:21:48.364238 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:21:48 crc kubenswrapper[4811]: I0216 21:21:48.364852 4811 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:21:48 crc kubenswrapper[4811]: I0216 21:21:48.364915 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" Feb 16 21:21:48 crc kubenswrapper[4811]: I0216 21:21:48.365795 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb"} pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:21:48 crc kubenswrapper[4811]: I0216 21:21:48.365852 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" containerID="cri-o://89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" gracePeriod=600 Feb 16 21:21:48 crc kubenswrapper[4811]: E0216 21:21:48.493863 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:21:48 crc kubenswrapper[4811]: I0216 21:21:48.580507 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9xvcb"] Feb 16 
21:21:48 crc kubenswrapper[4811]: I0216 21:21:48.582559 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9xvcb" Feb 16 21:21:48 crc kubenswrapper[4811]: I0216 21:21:48.592094 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9xvcb"] Feb 16 21:21:48 crc kubenswrapper[4811]: E0216 21:21:48.705937 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:21:48 crc kubenswrapper[4811]: I0216 21:21:48.719687 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68f67260-39f4-40da-8524-a89533be3a45-catalog-content\") pod \"redhat-operators-9xvcb\" (UID: \"68f67260-39f4-40da-8524-a89533be3a45\") " pod="openshift-marketplace/redhat-operators-9xvcb" Feb 16 21:21:48 crc kubenswrapper[4811]: I0216 21:21:48.719834 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68f67260-39f4-40da-8524-a89533be3a45-utilities\") pod \"redhat-operators-9xvcb\" (UID: \"68f67260-39f4-40da-8524-a89533be3a45\") " pod="openshift-marketplace/redhat-operators-9xvcb" Feb 16 21:21:48 crc kubenswrapper[4811]: I0216 21:21:48.719864 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhqqc\" (UniqueName: \"kubernetes.io/projected/68f67260-39f4-40da-8524-a89533be3a45-kube-api-access-rhqqc\") pod \"redhat-operators-9xvcb\" (UID: \"68f67260-39f4-40da-8524-a89533be3a45\") " pod="openshift-marketplace/redhat-operators-9xvcb" Feb 16 
21:21:48 crc kubenswrapper[4811]: I0216 21:21:48.821408 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68f67260-39f4-40da-8524-a89533be3a45-utilities\") pod \"redhat-operators-9xvcb\" (UID: \"68f67260-39f4-40da-8524-a89533be3a45\") " pod="openshift-marketplace/redhat-operators-9xvcb" Feb 16 21:21:48 crc kubenswrapper[4811]: I0216 21:21:48.821460 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhqqc\" (UniqueName: \"kubernetes.io/projected/68f67260-39f4-40da-8524-a89533be3a45-kube-api-access-rhqqc\") pod \"redhat-operators-9xvcb\" (UID: \"68f67260-39f4-40da-8524-a89533be3a45\") " pod="openshift-marketplace/redhat-operators-9xvcb" Feb 16 21:21:48 crc kubenswrapper[4811]: I0216 21:21:48.821640 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68f67260-39f4-40da-8524-a89533be3a45-catalog-content\") pod \"redhat-operators-9xvcb\" (UID: \"68f67260-39f4-40da-8524-a89533be3a45\") " pod="openshift-marketplace/redhat-operators-9xvcb" Feb 16 21:21:48 crc kubenswrapper[4811]: I0216 21:21:48.822297 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68f67260-39f4-40da-8524-a89533be3a45-catalog-content\") pod \"redhat-operators-9xvcb\" (UID: \"68f67260-39f4-40da-8524-a89533be3a45\") " pod="openshift-marketplace/redhat-operators-9xvcb" Feb 16 21:21:48 crc kubenswrapper[4811]: I0216 21:21:48.822843 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68f67260-39f4-40da-8524-a89533be3a45-utilities\") pod \"redhat-operators-9xvcb\" (UID: \"68f67260-39f4-40da-8524-a89533be3a45\") " pod="openshift-marketplace/redhat-operators-9xvcb" Feb 16 21:21:48 crc kubenswrapper[4811]: I0216 21:21:48.849746 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhqqc\" (UniqueName: \"kubernetes.io/projected/68f67260-39f4-40da-8524-a89533be3a45-kube-api-access-rhqqc\") pod \"redhat-operators-9xvcb\" (UID: \"68f67260-39f4-40da-8524-a89533be3a45\") " pod="openshift-marketplace/redhat-operators-9xvcb" Feb 16 21:21:48 crc kubenswrapper[4811]: I0216 21:21:48.945578 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9xvcb" Feb 16 21:21:49 crc kubenswrapper[4811]: I0216 21:21:49.138551 4811 generic.go:334] "Generic (PLEG): container finished" podID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerID="89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" exitCode=0 Feb 16 21:21:49 crc kubenswrapper[4811]: I0216 21:21:49.138586 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerDied","Data":"89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb"} Feb 16 21:21:49 crc kubenswrapper[4811]: I0216 21:21:49.138636 4811 scope.go:117] "RemoveContainer" containerID="38ec19e15b9324f2ccde21c32410034a04474118800f86b56f7b258842a5727e" Feb 16 21:21:49 crc kubenswrapper[4811]: I0216 21:21:49.139377 4811 scope.go:117] "RemoveContainer" containerID="89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" Feb 16 21:21:49 crc kubenswrapper[4811]: E0216 21:21:49.139741 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:21:49 crc kubenswrapper[4811]: 
I0216 21:21:49.487207 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9xvcb"] Feb 16 21:21:50 crc kubenswrapper[4811]: I0216 21:21:50.154592 4811 generic.go:334] "Generic (PLEG): container finished" podID="68f67260-39f4-40da-8524-a89533be3a45" containerID="e558dda91316f75e14acae6f4dbccc97fce5726d7cf3f983c10a250e0515d638" exitCode=0 Feb 16 21:21:50 crc kubenswrapper[4811]: I0216 21:21:50.154694 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9xvcb" event={"ID":"68f67260-39f4-40da-8524-a89533be3a45","Type":"ContainerDied","Data":"e558dda91316f75e14acae6f4dbccc97fce5726d7cf3f983c10a250e0515d638"} Feb 16 21:21:50 crc kubenswrapper[4811]: I0216 21:21:50.154864 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9xvcb" event={"ID":"68f67260-39f4-40da-8524-a89533be3a45","Type":"ContainerStarted","Data":"11cd07cfec929c9ff83443037824725870e798969a29c9b65c72418aed44ebae"} Feb 16 21:21:50 crc kubenswrapper[4811]: I0216 21:21:50.156346 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:21:52 crc kubenswrapper[4811]: I0216 21:21:52.180413 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9xvcb" event={"ID":"68f67260-39f4-40da-8524-a89533be3a45","Type":"ContainerStarted","Data":"07ce6bb5f21095e71d751a21f294f359b7d6f5b601ccd41b8504b6a9c006aa37"} Feb 16 21:21:53 crc kubenswrapper[4811]: I0216 21:21:53.194638 4811 generic.go:334] "Generic (PLEG): container finished" podID="68f67260-39f4-40da-8524-a89533be3a45" containerID="07ce6bb5f21095e71d751a21f294f359b7d6f5b601ccd41b8504b6a9c006aa37" exitCode=0 Feb 16 21:21:53 crc kubenswrapper[4811]: I0216 21:21:53.194738 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9xvcb" 
event={"ID":"68f67260-39f4-40da-8524-a89533be3a45","Type":"ContainerDied","Data":"07ce6bb5f21095e71d751a21f294f359b7d6f5b601ccd41b8504b6a9c006aa37"} Feb 16 21:21:54 crc kubenswrapper[4811]: I0216 21:21:54.208122 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9xvcb" event={"ID":"68f67260-39f4-40da-8524-a89533be3a45","Type":"ContainerStarted","Data":"13a18e9070609389ae166a1f11ed74af8336a8d8c456847402dfca8799fa27b6"} Feb 16 21:21:54 crc kubenswrapper[4811]: I0216 21:21:54.235073 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9xvcb" podStartSLOduration=2.762450282 podStartE2EDuration="6.235048938s" podCreationTimestamp="2026-02-16 21:21:48 +0000 UTC" firstStartedPulling="2026-02-16 21:21:50.156147214 +0000 UTC m=+1528.085443142" lastFinishedPulling="2026-02-16 21:21:53.62874582 +0000 UTC m=+1531.558041798" observedRunningTime="2026-02-16 21:21:54.226503107 +0000 UTC m=+1532.155799045" watchObservedRunningTime="2026-02-16 21:21:54.235048938 +0000 UTC m=+1532.164344886" Feb 16 21:21:55 crc kubenswrapper[4811]: I0216 21:21:55.962830 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z546j"] Feb 16 21:21:55 crc kubenswrapper[4811]: I0216 21:21:55.966556 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z546j" Feb 16 21:21:55 crc kubenswrapper[4811]: I0216 21:21:55.978539 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z546j"] Feb 16 21:21:56 crc kubenswrapper[4811]: I0216 21:21:56.070176 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcc4ebcd-ad62-4383-8010-6f2691fd5bcd-utilities\") pod \"redhat-marketplace-z546j\" (UID: \"bcc4ebcd-ad62-4383-8010-6f2691fd5bcd\") " pod="openshift-marketplace/redhat-marketplace-z546j" Feb 16 21:21:56 crc kubenswrapper[4811]: I0216 21:21:56.070310 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgmdf\" (UniqueName: \"kubernetes.io/projected/bcc4ebcd-ad62-4383-8010-6f2691fd5bcd-kube-api-access-cgmdf\") pod \"redhat-marketplace-z546j\" (UID: \"bcc4ebcd-ad62-4383-8010-6f2691fd5bcd\") " pod="openshift-marketplace/redhat-marketplace-z546j" Feb 16 21:21:56 crc kubenswrapper[4811]: I0216 21:21:56.070373 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcc4ebcd-ad62-4383-8010-6f2691fd5bcd-catalog-content\") pod \"redhat-marketplace-z546j\" (UID: \"bcc4ebcd-ad62-4383-8010-6f2691fd5bcd\") " pod="openshift-marketplace/redhat-marketplace-z546j" Feb 16 21:21:56 crc kubenswrapper[4811]: I0216 21:21:56.172108 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcc4ebcd-ad62-4383-8010-6f2691fd5bcd-utilities\") pod \"redhat-marketplace-z546j\" (UID: \"bcc4ebcd-ad62-4383-8010-6f2691fd5bcd\") " pod="openshift-marketplace/redhat-marketplace-z546j" Feb 16 21:21:56 crc kubenswrapper[4811]: I0216 21:21:56.172214 4811 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-cgmdf\" (UniqueName: \"kubernetes.io/projected/bcc4ebcd-ad62-4383-8010-6f2691fd5bcd-kube-api-access-cgmdf\") pod \"redhat-marketplace-z546j\" (UID: \"bcc4ebcd-ad62-4383-8010-6f2691fd5bcd\") " pod="openshift-marketplace/redhat-marketplace-z546j" Feb 16 21:21:56 crc kubenswrapper[4811]: I0216 21:21:56.172271 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcc4ebcd-ad62-4383-8010-6f2691fd5bcd-catalog-content\") pod \"redhat-marketplace-z546j\" (UID: \"bcc4ebcd-ad62-4383-8010-6f2691fd5bcd\") " pod="openshift-marketplace/redhat-marketplace-z546j" Feb 16 21:21:56 crc kubenswrapper[4811]: I0216 21:21:56.172808 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcc4ebcd-ad62-4383-8010-6f2691fd5bcd-catalog-content\") pod \"redhat-marketplace-z546j\" (UID: \"bcc4ebcd-ad62-4383-8010-6f2691fd5bcd\") " pod="openshift-marketplace/redhat-marketplace-z546j" Feb 16 21:21:56 crc kubenswrapper[4811]: I0216 21:21:56.173051 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcc4ebcd-ad62-4383-8010-6f2691fd5bcd-utilities\") pod \"redhat-marketplace-z546j\" (UID: \"bcc4ebcd-ad62-4383-8010-6f2691fd5bcd\") " pod="openshift-marketplace/redhat-marketplace-z546j" Feb 16 21:21:56 crc kubenswrapper[4811]: I0216 21:21:56.193232 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgmdf\" (UniqueName: \"kubernetes.io/projected/bcc4ebcd-ad62-4383-8010-6f2691fd5bcd-kube-api-access-cgmdf\") pod \"redhat-marketplace-z546j\" (UID: \"bcc4ebcd-ad62-4383-8010-6f2691fd5bcd\") " pod="openshift-marketplace/redhat-marketplace-z546j" Feb 16 21:21:56 crc kubenswrapper[4811]: I0216 21:21:56.307055 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z546j" Feb 16 21:21:56 crc kubenswrapper[4811]: I0216 21:21:56.816494 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z546j"] Feb 16 21:21:57 crc kubenswrapper[4811]: I0216 21:21:57.241709 4811 generic.go:334] "Generic (PLEG): container finished" podID="bcc4ebcd-ad62-4383-8010-6f2691fd5bcd" containerID="e49f2589a25ac377d99b4e532292795248481a9933b10dcdb863533d2eae4923" exitCode=0 Feb 16 21:21:57 crc kubenswrapper[4811]: I0216 21:21:57.241772 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z546j" event={"ID":"bcc4ebcd-ad62-4383-8010-6f2691fd5bcd","Type":"ContainerDied","Data":"e49f2589a25ac377d99b4e532292795248481a9933b10dcdb863533d2eae4923"} Feb 16 21:21:57 crc kubenswrapper[4811]: I0216 21:21:57.242109 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z546j" event={"ID":"bcc4ebcd-ad62-4383-8010-6f2691fd5bcd","Type":"ContainerStarted","Data":"de2c3aa65bf222599b3eb7c421edd1d02b46b87137f645af103d1c517cd51912"} Feb 16 21:21:58 crc kubenswrapper[4811]: I0216 21:21:58.252758 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z546j" event={"ID":"bcc4ebcd-ad62-4383-8010-6f2691fd5bcd","Type":"ContainerStarted","Data":"abbaaf4564b358b0d7de6a3130b2d3d3d56671473cabfe8f38ea4bac56eca234"} Feb 16 21:21:58 crc kubenswrapper[4811]: I0216 21:21:58.946735 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9xvcb" Feb 16 21:21:58 crc kubenswrapper[4811]: I0216 21:21:58.947249 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9xvcb" Feb 16 21:21:59 crc kubenswrapper[4811]: I0216 21:21:59.266608 4811 generic.go:334] "Generic (PLEG): container finished" 
podID="bcc4ebcd-ad62-4383-8010-6f2691fd5bcd" containerID="abbaaf4564b358b0d7de6a3130b2d3d3d56671473cabfe8f38ea4bac56eca234" exitCode=0 Feb 16 21:21:59 crc kubenswrapper[4811]: I0216 21:21:59.266648 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z546j" event={"ID":"bcc4ebcd-ad62-4383-8010-6f2691fd5bcd","Type":"ContainerDied","Data":"abbaaf4564b358b0d7de6a3130b2d3d3d56671473cabfe8f38ea4bac56eca234"} Feb 16 21:21:59 crc kubenswrapper[4811]: E0216 21:21:59.709507 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:22:00 crc kubenswrapper[4811]: I0216 21:22:00.021456 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9xvcb" podUID="68f67260-39f4-40da-8524-a89533be3a45" containerName="registry-server" probeResult="failure" output=< Feb 16 21:22:00 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Feb 16 21:22:00 crc kubenswrapper[4811]: > Feb 16 21:22:00 crc kubenswrapper[4811]: I0216 21:22:00.278598 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z546j" event={"ID":"bcc4ebcd-ad62-4383-8010-6f2691fd5bcd","Type":"ContainerStarted","Data":"a12b78feaedf63bd34581c2a98ec1cddf7a5b3e96b0b67fcd36249cc3cc7353e"} Feb 16 21:22:00 crc kubenswrapper[4811]: I0216 21:22:00.302424 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z546j" podStartSLOduration=2.835683346 podStartE2EDuration="5.302401124s" podCreationTimestamp="2026-02-16 21:21:55 +0000 UTC" firstStartedPulling="2026-02-16 21:21:57.243614731 +0000 UTC m=+1535.172910679" 
lastFinishedPulling="2026-02-16 21:21:59.710332529 +0000 UTC m=+1537.639628457" observedRunningTime="2026-02-16 21:22:00.297433282 +0000 UTC m=+1538.226729230" watchObservedRunningTime="2026-02-16 21:22:00.302401124 +0000 UTC m=+1538.231697062" Feb 16 21:22:01 crc kubenswrapper[4811]: I0216 21:22:01.703240 4811 scope.go:117] "RemoveContainer" containerID="89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" Feb 16 21:22:01 crc kubenswrapper[4811]: E0216 21:22:01.703899 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:22:06 crc kubenswrapper[4811]: I0216 21:22:06.307399 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z546j" Feb 16 21:22:06 crc kubenswrapper[4811]: I0216 21:22:06.308795 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z546j" Feb 16 21:22:06 crc kubenswrapper[4811]: I0216 21:22:06.405351 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z546j" Feb 16 21:22:06 crc kubenswrapper[4811]: I0216 21:22:06.502893 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z546j" Feb 16 21:22:06 crc kubenswrapper[4811]: I0216 21:22:06.651847 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z546j"] Feb 16 21:22:08 crc kubenswrapper[4811]: I0216 21:22:08.422010 4811 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-z546j" podUID="bcc4ebcd-ad62-4383-8010-6f2691fd5bcd" containerName="registry-server" containerID="cri-o://a12b78feaedf63bd34581c2a98ec1cddf7a5b3e96b0b67fcd36249cc3cc7353e" gracePeriod=2 Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.014527 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z546j" Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.039814 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9xvcb" Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.116227 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9xvcb" Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.186380 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcc4ebcd-ad62-4383-8010-6f2691fd5bcd-catalog-content\") pod \"bcc4ebcd-ad62-4383-8010-6f2691fd5bcd\" (UID: \"bcc4ebcd-ad62-4383-8010-6f2691fd5bcd\") " Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.186448 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgmdf\" (UniqueName: \"kubernetes.io/projected/bcc4ebcd-ad62-4383-8010-6f2691fd5bcd-kube-api-access-cgmdf\") pod \"bcc4ebcd-ad62-4383-8010-6f2691fd5bcd\" (UID: \"bcc4ebcd-ad62-4383-8010-6f2691fd5bcd\") " Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.186579 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcc4ebcd-ad62-4383-8010-6f2691fd5bcd-utilities\") pod \"bcc4ebcd-ad62-4383-8010-6f2691fd5bcd\" (UID: \"bcc4ebcd-ad62-4383-8010-6f2691fd5bcd\") " Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.188214 4811 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/bcc4ebcd-ad62-4383-8010-6f2691fd5bcd-utilities" (OuterVolumeSpecName: "utilities") pod "bcc4ebcd-ad62-4383-8010-6f2691fd5bcd" (UID: "bcc4ebcd-ad62-4383-8010-6f2691fd5bcd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.199356 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcc4ebcd-ad62-4383-8010-6f2691fd5bcd-kube-api-access-cgmdf" (OuterVolumeSpecName: "kube-api-access-cgmdf") pod "bcc4ebcd-ad62-4383-8010-6f2691fd5bcd" (UID: "bcc4ebcd-ad62-4383-8010-6f2691fd5bcd"). InnerVolumeSpecName "kube-api-access-cgmdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.216119 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcc4ebcd-ad62-4383-8010-6f2691fd5bcd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bcc4ebcd-ad62-4383-8010-6f2691fd5bcd" (UID: "bcc4ebcd-ad62-4383-8010-6f2691fd5bcd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.288333 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcc4ebcd-ad62-4383-8010-6f2691fd5bcd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.288365 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgmdf\" (UniqueName: \"kubernetes.io/projected/bcc4ebcd-ad62-4383-8010-6f2691fd5bcd-kube-api-access-cgmdf\") on node \"crc\" DevicePath \"\"" Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.288383 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcc4ebcd-ad62-4383-8010-6f2691fd5bcd-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.458691 4811 generic.go:334] "Generic (PLEG): container finished" podID="bcc4ebcd-ad62-4383-8010-6f2691fd5bcd" containerID="a12b78feaedf63bd34581c2a98ec1cddf7a5b3e96b0b67fcd36249cc3cc7353e" exitCode=0 Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.458745 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z546j" Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.458785 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z546j" event={"ID":"bcc4ebcd-ad62-4383-8010-6f2691fd5bcd","Type":"ContainerDied","Data":"a12b78feaedf63bd34581c2a98ec1cddf7a5b3e96b0b67fcd36249cc3cc7353e"} Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.458814 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z546j" event={"ID":"bcc4ebcd-ad62-4383-8010-6f2691fd5bcd","Type":"ContainerDied","Data":"de2c3aa65bf222599b3eb7c421edd1d02b46b87137f645af103d1c517cd51912"} Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.458832 4811 scope.go:117] "RemoveContainer" containerID="a12b78feaedf63bd34581c2a98ec1cddf7a5b3e96b0b67fcd36249cc3cc7353e" Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.490276 4811 scope.go:117] "RemoveContainer" containerID="abbaaf4564b358b0d7de6a3130b2d3d3d56671473cabfe8f38ea4bac56eca234" Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.504281 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z546j"] Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.516147 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z546j"] Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.522338 4811 scope.go:117] "RemoveContainer" containerID="e49f2589a25ac377d99b4e532292795248481a9933b10dcdb863533d2eae4923" Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.572358 4811 scope.go:117] "RemoveContainer" containerID="a12b78feaedf63bd34581c2a98ec1cddf7a5b3e96b0b67fcd36249cc3cc7353e" Feb 16 21:22:09 crc kubenswrapper[4811]: E0216 21:22:09.572960 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a12b78feaedf63bd34581c2a98ec1cddf7a5b3e96b0b67fcd36249cc3cc7353e\": container with ID starting with a12b78feaedf63bd34581c2a98ec1cddf7a5b3e96b0b67fcd36249cc3cc7353e not found: ID does not exist" containerID="a12b78feaedf63bd34581c2a98ec1cddf7a5b3e96b0b67fcd36249cc3cc7353e" Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.573009 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a12b78feaedf63bd34581c2a98ec1cddf7a5b3e96b0b67fcd36249cc3cc7353e"} err="failed to get container status \"a12b78feaedf63bd34581c2a98ec1cddf7a5b3e96b0b67fcd36249cc3cc7353e\": rpc error: code = NotFound desc = could not find container \"a12b78feaedf63bd34581c2a98ec1cddf7a5b3e96b0b67fcd36249cc3cc7353e\": container with ID starting with a12b78feaedf63bd34581c2a98ec1cddf7a5b3e96b0b67fcd36249cc3cc7353e not found: ID does not exist" Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.573040 4811 scope.go:117] "RemoveContainer" containerID="abbaaf4564b358b0d7de6a3130b2d3d3d56671473cabfe8f38ea4bac56eca234" Feb 16 21:22:09 crc kubenswrapper[4811]: E0216 21:22:09.573439 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abbaaf4564b358b0d7de6a3130b2d3d3d56671473cabfe8f38ea4bac56eca234\": container with ID starting with abbaaf4564b358b0d7de6a3130b2d3d3d56671473cabfe8f38ea4bac56eca234 not found: ID does not exist" containerID="abbaaf4564b358b0d7de6a3130b2d3d3d56671473cabfe8f38ea4bac56eca234" Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.573471 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abbaaf4564b358b0d7de6a3130b2d3d3d56671473cabfe8f38ea4bac56eca234"} err="failed to get container status \"abbaaf4564b358b0d7de6a3130b2d3d3d56671473cabfe8f38ea4bac56eca234\": rpc error: code = NotFound desc = could not find container \"abbaaf4564b358b0d7de6a3130b2d3d3d56671473cabfe8f38ea4bac56eca234\": container with ID 
starting with abbaaf4564b358b0d7de6a3130b2d3d3d56671473cabfe8f38ea4bac56eca234 not found: ID does not exist" Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.573493 4811 scope.go:117] "RemoveContainer" containerID="e49f2589a25ac377d99b4e532292795248481a9933b10dcdb863533d2eae4923" Feb 16 21:22:09 crc kubenswrapper[4811]: E0216 21:22:09.573831 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e49f2589a25ac377d99b4e532292795248481a9933b10dcdb863533d2eae4923\": container with ID starting with e49f2589a25ac377d99b4e532292795248481a9933b10dcdb863533d2eae4923 not found: ID does not exist" containerID="e49f2589a25ac377d99b4e532292795248481a9933b10dcdb863533d2eae4923" Feb 16 21:22:09 crc kubenswrapper[4811]: I0216 21:22:09.573852 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e49f2589a25ac377d99b4e532292795248481a9933b10dcdb863533d2eae4923"} err="failed to get container status \"e49f2589a25ac377d99b4e532292795248481a9933b10dcdb863533d2eae4923\": rpc error: code = NotFound desc = could not find container \"e49f2589a25ac377d99b4e532292795248481a9933b10dcdb863533d2eae4923\": container with ID starting with e49f2589a25ac377d99b4e532292795248481a9933b10dcdb863533d2eae4923 not found: ID does not exist" Feb 16 21:22:10 crc kubenswrapper[4811]: I0216 21:22:10.451017 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9xvcb"] Feb 16 21:22:10 crc kubenswrapper[4811]: I0216 21:22:10.543798 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9xvcb" podUID="68f67260-39f4-40da-8524-a89533be3a45" containerName="registry-server" containerID="cri-o://13a18e9070609389ae166a1f11ed74af8336a8d8c456847402dfca8799fa27b6" gracePeriod=2 Feb 16 21:22:10 crc kubenswrapper[4811]: I0216 21:22:10.715854 4811 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="bcc4ebcd-ad62-4383-8010-6f2691fd5bcd" path="/var/lib/kubelet/pods/bcc4ebcd-ad62-4383-8010-6f2691fd5bcd/volumes" Feb 16 21:22:11 crc kubenswrapper[4811]: I0216 21:22:11.069408 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9xvcb" Feb 16 21:22:11 crc kubenswrapper[4811]: I0216 21:22:11.121963 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhqqc\" (UniqueName: \"kubernetes.io/projected/68f67260-39f4-40da-8524-a89533be3a45-kube-api-access-rhqqc\") pod \"68f67260-39f4-40da-8524-a89533be3a45\" (UID: \"68f67260-39f4-40da-8524-a89533be3a45\") " Feb 16 21:22:11 crc kubenswrapper[4811]: I0216 21:22:11.122048 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68f67260-39f4-40da-8524-a89533be3a45-catalog-content\") pod \"68f67260-39f4-40da-8524-a89533be3a45\" (UID: \"68f67260-39f4-40da-8524-a89533be3a45\") " Feb 16 21:22:11 crc kubenswrapper[4811]: I0216 21:22:11.122118 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68f67260-39f4-40da-8524-a89533be3a45-utilities\") pod \"68f67260-39f4-40da-8524-a89533be3a45\" (UID: \"68f67260-39f4-40da-8524-a89533be3a45\") " Feb 16 21:22:11 crc kubenswrapper[4811]: I0216 21:22:11.123083 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68f67260-39f4-40da-8524-a89533be3a45-utilities" (OuterVolumeSpecName: "utilities") pod "68f67260-39f4-40da-8524-a89533be3a45" (UID: "68f67260-39f4-40da-8524-a89533be3a45"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:22:11 crc kubenswrapper[4811]: I0216 21:22:11.130945 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68f67260-39f4-40da-8524-a89533be3a45-kube-api-access-rhqqc" (OuterVolumeSpecName: "kube-api-access-rhqqc") pod "68f67260-39f4-40da-8524-a89533be3a45" (UID: "68f67260-39f4-40da-8524-a89533be3a45"). InnerVolumeSpecName "kube-api-access-rhqqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:22:11 crc kubenswrapper[4811]: I0216 21:22:11.224054 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhqqc\" (UniqueName: \"kubernetes.io/projected/68f67260-39f4-40da-8524-a89533be3a45-kube-api-access-rhqqc\") on node \"crc\" DevicePath \"\"" Feb 16 21:22:11 crc kubenswrapper[4811]: I0216 21:22:11.224093 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68f67260-39f4-40da-8524-a89533be3a45-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:22:11 crc kubenswrapper[4811]: I0216 21:22:11.263758 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68f67260-39f4-40da-8524-a89533be3a45-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68f67260-39f4-40da-8524-a89533be3a45" (UID: "68f67260-39f4-40da-8524-a89533be3a45"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:22:11 crc kubenswrapper[4811]: I0216 21:22:11.326072 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68f67260-39f4-40da-8524-a89533be3a45-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:22:11 crc kubenswrapper[4811]: I0216 21:22:11.566306 4811 generic.go:334] "Generic (PLEG): container finished" podID="68f67260-39f4-40da-8524-a89533be3a45" containerID="13a18e9070609389ae166a1f11ed74af8336a8d8c456847402dfca8799fa27b6" exitCode=0 Feb 16 21:22:11 crc kubenswrapper[4811]: I0216 21:22:11.566509 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9xvcb" event={"ID":"68f67260-39f4-40da-8524-a89533be3a45","Type":"ContainerDied","Data":"13a18e9070609389ae166a1f11ed74af8336a8d8c456847402dfca8799fa27b6"} Feb 16 21:22:11 crc kubenswrapper[4811]: I0216 21:22:11.566606 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9xvcb" Feb 16 21:22:11 crc kubenswrapper[4811]: I0216 21:22:11.566632 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9xvcb" event={"ID":"68f67260-39f4-40da-8524-a89533be3a45","Type":"ContainerDied","Data":"11cd07cfec929c9ff83443037824725870e798969a29c9b65c72418aed44ebae"} Feb 16 21:22:11 crc kubenswrapper[4811]: I0216 21:22:11.566660 4811 scope.go:117] "RemoveContainer" containerID="13a18e9070609389ae166a1f11ed74af8336a8d8c456847402dfca8799fa27b6" Feb 16 21:22:11 crc kubenswrapper[4811]: I0216 21:22:11.602233 4811 scope.go:117] "RemoveContainer" containerID="07ce6bb5f21095e71d751a21f294f359b7d6f5b601ccd41b8504b6a9c006aa37" Feb 16 21:22:11 crc kubenswrapper[4811]: I0216 21:22:11.609896 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9xvcb"] Feb 16 21:22:11 crc kubenswrapper[4811]: I0216 21:22:11.622524 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9xvcb"] Feb 16 21:22:11 crc kubenswrapper[4811]: I0216 21:22:11.639116 4811 scope.go:117] "RemoveContainer" containerID="e558dda91316f75e14acae6f4dbccc97fce5726d7cf3f983c10a250e0515d638" Feb 16 21:22:11 crc kubenswrapper[4811]: I0216 21:22:11.683789 4811 scope.go:117] "RemoveContainer" containerID="13a18e9070609389ae166a1f11ed74af8336a8d8c456847402dfca8799fa27b6" Feb 16 21:22:11 crc kubenswrapper[4811]: E0216 21:22:11.684281 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13a18e9070609389ae166a1f11ed74af8336a8d8c456847402dfca8799fa27b6\": container with ID starting with 13a18e9070609389ae166a1f11ed74af8336a8d8c456847402dfca8799fa27b6 not found: ID does not exist" containerID="13a18e9070609389ae166a1f11ed74af8336a8d8c456847402dfca8799fa27b6" Feb 16 21:22:11 crc kubenswrapper[4811]: I0216 21:22:11.684313 4811 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13a18e9070609389ae166a1f11ed74af8336a8d8c456847402dfca8799fa27b6"} err="failed to get container status \"13a18e9070609389ae166a1f11ed74af8336a8d8c456847402dfca8799fa27b6\": rpc error: code = NotFound desc = could not find container \"13a18e9070609389ae166a1f11ed74af8336a8d8c456847402dfca8799fa27b6\": container with ID starting with 13a18e9070609389ae166a1f11ed74af8336a8d8c456847402dfca8799fa27b6 not found: ID does not exist" Feb 16 21:22:11 crc kubenswrapper[4811]: I0216 21:22:11.684333 4811 scope.go:117] "RemoveContainer" containerID="07ce6bb5f21095e71d751a21f294f359b7d6f5b601ccd41b8504b6a9c006aa37" Feb 16 21:22:11 crc kubenswrapper[4811]: E0216 21:22:11.684669 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07ce6bb5f21095e71d751a21f294f359b7d6f5b601ccd41b8504b6a9c006aa37\": container with ID starting with 07ce6bb5f21095e71d751a21f294f359b7d6f5b601ccd41b8504b6a9c006aa37 not found: ID does not exist" containerID="07ce6bb5f21095e71d751a21f294f359b7d6f5b601ccd41b8504b6a9c006aa37" Feb 16 21:22:11 crc kubenswrapper[4811]: I0216 21:22:11.684691 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07ce6bb5f21095e71d751a21f294f359b7d6f5b601ccd41b8504b6a9c006aa37"} err="failed to get container status \"07ce6bb5f21095e71d751a21f294f359b7d6f5b601ccd41b8504b6a9c006aa37\": rpc error: code = NotFound desc = could not find container \"07ce6bb5f21095e71d751a21f294f359b7d6f5b601ccd41b8504b6a9c006aa37\": container with ID starting with 07ce6bb5f21095e71d751a21f294f359b7d6f5b601ccd41b8504b6a9c006aa37 not found: ID does not exist" Feb 16 21:22:11 crc kubenswrapper[4811]: I0216 21:22:11.684706 4811 scope.go:117] "RemoveContainer" containerID="e558dda91316f75e14acae6f4dbccc97fce5726d7cf3f983c10a250e0515d638" Feb 16 21:22:11 crc kubenswrapper[4811]: E0216 
21:22:11.685058 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e558dda91316f75e14acae6f4dbccc97fce5726d7cf3f983c10a250e0515d638\": container with ID starting with e558dda91316f75e14acae6f4dbccc97fce5726d7cf3f983c10a250e0515d638 not found: ID does not exist" containerID="e558dda91316f75e14acae6f4dbccc97fce5726d7cf3f983c10a250e0515d638" Feb 16 21:22:11 crc kubenswrapper[4811]: I0216 21:22:11.685107 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e558dda91316f75e14acae6f4dbccc97fce5726d7cf3f983c10a250e0515d638"} err="failed to get container status \"e558dda91316f75e14acae6f4dbccc97fce5726d7cf3f983c10a250e0515d638\": rpc error: code = NotFound desc = could not find container \"e558dda91316f75e14acae6f4dbccc97fce5726d7cf3f983c10a250e0515d638\": container with ID starting with e558dda91316f75e14acae6f4dbccc97fce5726d7cf3f983c10a250e0515d638 not found: ID does not exist" Feb 16 21:22:12 crc kubenswrapper[4811]: I0216 21:22:12.717613 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68f67260-39f4-40da-8524-a89533be3a45" path="/var/lib/kubelet/pods/68f67260-39f4-40da-8524-a89533be3a45/volumes" Feb 16 21:22:14 crc kubenswrapper[4811]: I0216 21:22:14.703507 4811 scope.go:117] "RemoveContainer" containerID="89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" Feb 16 21:22:14 crc kubenswrapper[4811]: E0216 21:22:14.704232 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:22:14 crc kubenswrapper[4811]: E0216 21:22:14.706603 
4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:22:26 crc kubenswrapper[4811]: E0216 21:22:26.704904 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:22:27 crc kubenswrapper[4811]: I0216 21:22:27.703262 4811 scope.go:117] "RemoveContainer" containerID="89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" Feb 16 21:22:27 crc kubenswrapper[4811]: E0216 21:22:27.703703 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:22:37 crc kubenswrapper[4811]: E0216 21:22:37.705373 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:22:38 crc kubenswrapper[4811]: I0216 21:22:38.702593 4811 scope.go:117] "RemoveContainer" containerID="89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" 
Feb 16 21:22:38 crc kubenswrapper[4811]: E0216 21:22:38.703177 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:22:50 crc kubenswrapper[4811]: I0216 21:22:50.279641 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hksk8"] Feb 16 21:22:50 crc kubenswrapper[4811]: E0216 21:22:50.280561 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcc4ebcd-ad62-4383-8010-6f2691fd5bcd" containerName="extract-content" Feb 16 21:22:50 crc kubenswrapper[4811]: I0216 21:22:50.280575 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcc4ebcd-ad62-4383-8010-6f2691fd5bcd" containerName="extract-content" Feb 16 21:22:50 crc kubenswrapper[4811]: E0216 21:22:50.280588 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcc4ebcd-ad62-4383-8010-6f2691fd5bcd" containerName="registry-server" Feb 16 21:22:50 crc kubenswrapper[4811]: I0216 21:22:50.280593 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcc4ebcd-ad62-4383-8010-6f2691fd5bcd" containerName="registry-server" Feb 16 21:22:50 crc kubenswrapper[4811]: E0216 21:22:50.280617 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68f67260-39f4-40da-8524-a89533be3a45" containerName="registry-server" Feb 16 21:22:50 crc kubenswrapper[4811]: I0216 21:22:50.280624 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="68f67260-39f4-40da-8524-a89533be3a45" containerName="registry-server" Feb 16 21:22:50 crc kubenswrapper[4811]: E0216 21:22:50.280640 4811 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="68f67260-39f4-40da-8524-a89533be3a45" containerName="extract-content" Feb 16 21:22:50 crc kubenswrapper[4811]: I0216 21:22:50.280646 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="68f67260-39f4-40da-8524-a89533be3a45" containerName="extract-content" Feb 16 21:22:50 crc kubenswrapper[4811]: E0216 21:22:50.280658 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcc4ebcd-ad62-4383-8010-6f2691fd5bcd" containerName="extract-utilities" Feb 16 21:22:50 crc kubenswrapper[4811]: I0216 21:22:50.280664 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcc4ebcd-ad62-4383-8010-6f2691fd5bcd" containerName="extract-utilities" Feb 16 21:22:50 crc kubenswrapper[4811]: E0216 21:22:50.280677 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68f67260-39f4-40da-8524-a89533be3a45" containerName="extract-utilities" Feb 16 21:22:50 crc kubenswrapper[4811]: I0216 21:22:50.280683 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="68f67260-39f4-40da-8524-a89533be3a45" containerName="extract-utilities" Feb 16 21:22:50 crc kubenswrapper[4811]: I0216 21:22:50.280878 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="68f67260-39f4-40da-8524-a89533be3a45" containerName="registry-server" Feb 16 21:22:50 crc kubenswrapper[4811]: I0216 21:22:50.280895 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcc4ebcd-ad62-4383-8010-6f2691fd5bcd" containerName="registry-server" Feb 16 21:22:50 crc kubenswrapper[4811]: I0216 21:22:50.282437 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hksk8" Feb 16 21:22:50 crc kubenswrapper[4811]: I0216 21:22:50.288596 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hksk8"] Feb 16 21:22:50 crc kubenswrapper[4811]: I0216 21:22:50.346296 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e112d09-ccaa-413b-a2a4-533775de34f8-catalog-content\") pod \"certified-operators-hksk8\" (UID: \"2e112d09-ccaa-413b-a2a4-533775de34f8\") " pod="openshift-marketplace/certified-operators-hksk8" Feb 16 21:22:50 crc kubenswrapper[4811]: I0216 21:22:50.346361 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz7fr\" (UniqueName: \"kubernetes.io/projected/2e112d09-ccaa-413b-a2a4-533775de34f8-kube-api-access-rz7fr\") pod \"certified-operators-hksk8\" (UID: \"2e112d09-ccaa-413b-a2a4-533775de34f8\") " pod="openshift-marketplace/certified-operators-hksk8" Feb 16 21:22:50 crc kubenswrapper[4811]: I0216 21:22:50.346422 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e112d09-ccaa-413b-a2a4-533775de34f8-utilities\") pod \"certified-operators-hksk8\" (UID: \"2e112d09-ccaa-413b-a2a4-533775de34f8\") " pod="openshift-marketplace/certified-operators-hksk8" Feb 16 21:22:50 crc kubenswrapper[4811]: I0216 21:22:50.448711 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e112d09-ccaa-413b-a2a4-533775de34f8-catalog-content\") pod \"certified-operators-hksk8\" (UID: \"2e112d09-ccaa-413b-a2a4-533775de34f8\") " pod="openshift-marketplace/certified-operators-hksk8" Feb 16 21:22:50 crc kubenswrapper[4811]: I0216 21:22:50.448773 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rz7fr\" (UniqueName: \"kubernetes.io/projected/2e112d09-ccaa-413b-a2a4-533775de34f8-kube-api-access-rz7fr\") pod \"certified-operators-hksk8\" (UID: \"2e112d09-ccaa-413b-a2a4-533775de34f8\") " pod="openshift-marketplace/certified-operators-hksk8" Feb 16 21:22:50 crc kubenswrapper[4811]: I0216 21:22:50.448812 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e112d09-ccaa-413b-a2a4-533775de34f8-utilities\") pod \"certified-operators-hksk8\" (UID: \"2e112d09-ccaa-413b-a2a4-533775de34f8\") " pod="openshift-marketplace/certified-operators-hksk8" Feb 16 21:22:50 crc kubenswrapper[4811]: I0216 21:22:50.449355 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e112d09-ccaa-413b-a2a4-533775de34f8-catalog-content\") pod \"certified-operators-hksk8\" (UID: \"2e112d09-ccaa-413b-a2a4-533775de34f8\") " pod="openshift-marketplace/certified-operators-hksk8" Feb 16 21:22:50 crc kubenswrapper[4811]: I0216 21:22:50.449379 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e112d09-ccaa-413b-a2a4-533775de34f8-utilities\") pod \"certified-operators-hksk8\" (UID: \"2e112d09-ccaa-413b-a2a4-533775de34f8\") " pod="openshift-marketplace/certified-operators-hksk8" Feb 16 21:22:50 crc kubenswrapper[4811]: I0216 21:22:50.471231 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz7fr\" (UniqueName: \"kubernetes.io/projected/2e112d09-ccaa-413b-a2a4-533775de34f8-kube-api-access-rz7fr\") pod \"certified-operators-hksk8\" (UID: \"2e112d09-ccaa-413b-a2a4-533775de34f8\") " pod="openshift-marketplace/certified-operators-hksk8" Feb 16 21:22:50 crc kubenswrapper[4811]: I0216 21:22:50.611651 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hksk8" Feb 16 21:22:50 crc kubenswrapper[4811]: I0216 21:22:50.705023 4811 scope.go:117] "RemoveContainer" containerID="89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" Feb 16 21:22:50 crc kubenswrapper[4811]: E0216 21:22:50.705370 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:22:51 crc kubenswrapper[4811]: I0216 21:22:51.120043 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hksk8"] Feb 16 21:22:52 crc kubenswrapper[4811]: I0216 21:22:52.031249 4811 generic.go:334] "Generic (PLEG): container finished" podID="2e112d09-ccaa-413b-a2a4-533775de34f8" containerID="b0fa859a3ee229192bb7844875fa8a70a0bb9a1f116a570d9b257d719b070089" exitCode=0 Feb 16 21:22:52 crc kubenswrapper[4811]: I0216 21:22:52.031334 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hksk8" event={"ID":"2e112d09-ccaa-413b-a2a4-533775de34f8","Type":"ContainerDied","Data":"b0fa859a3ee229192bb7844875fa8a70a0bb9a1f116a570d9b257d719b070089"} Feb 16 21:22:52 crc kubenswrapper[4811]: I0216 21:22:52.031672 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hksk8" event={"ID":"2e112d09-ccaa-413b-a2a4-533775de34f8","Type":"ContainerStarted","Data":"635c728406c60296709e0c02c0149653f32bddf523003b0accb3516f8f6ac178"} Feb 16 21:22:52 crc kubenswrapper[4811]: E0216 21:22:52.714530 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:22:53 crc kubenswrapper[4811]: I0216 21:22:53.044801 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hksk8" event={"ID":"2e112d09-ccaa-413b-a2a4-533775de34f8","Type":"ContainerStarted","Data":"84ebfd8ca2046b8a790cd6f4c46175bbb96d8cc16648b40929270595d9c8389e"} Feb 16 21:22:55 crc kubenswrapper[4811]: I0216 21:22:55.075316 4811 generic.go:334] "Generic (PLEG): container finished" podID="2e112d09-ccaa-413b-a2a4-533775de34f8" containerID="84ebfd8ca2046b8a790cd6f4c46175bbb96d8cc16648b40929270595d9c8389e" exitCode=0 Feb 16 21:22:55 crc kubenswrapper[4811]: I0216 21:22:55.075427 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hksk8" event={"ID":"2e112d09-ccaa-413b-a2a4-533775de34f8","Type":"ContainerDied","Data":"84ebfd8ca2046b8a790cd6f4c46175bbb96d8cc16648b40929270595d9c8389e"} Feb 16 21:22:56 crc kubenswrapper[4811]: I0216 21:22:56.086417 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hksk8" event={"ID":"2e112d09-ccaa-413b-a2a4-533775de34f8","Type":"ContainerStarted","Data":"61ac0f334fceecbe5642927f4fb2d3ec2acdef33caefecdf7c62ce1842136441"} Feb 16 21:22:56 crc kubenswrapper[4811]: I0216 21:22:56.111761 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hksk8" podStartSLOduration=2.4174850660000002 podStartE2EDuration="6.111744408s" podCreationTimestamp="2026-02-16 21:22:50 +0000 UTC" firstStartedPulling="2026-02-16 21:22:52.034158756 +0000 UTC m=+1589.963454694" lastFinishedPulling="2026-02-16 21:22:55.728418098 +0000 UTC m=+1593.657714036" observedRunningTime="2026-02-16 21:22:56.104229292 
+0000 UTC m=+1594.033525260" watchObservedRunningTime="2026-02-16 21:22:56.111744408 +0000 UTC m=+1594.041040346" Feb 16 21:22:58 crc kubenswrapper[4811]: I0216 21:22:58.040026 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-lmvpb"] Feb 16 21:22:58 crc kubenswrapper[4811]: I0216 21:22:58.050576 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-lmvpb"] Feb 16 21:22:58 crc kubenswrapper[4811]: I0216 21:22:58.722694 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15286204-6ffc-4f13-aacb-8c231edf893d" path="/var/lib/kubelet/pods/15286204-6ffc-4f13-aacb-8c231edf893d/volumes" Feb 16 21:22:59 crc kubenswrapper[4811]: I0216 21:22:59.048331 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-mstfh"] Feb 16 21:22:59 crc kubenswrapper[4811]: I0216 21:22:59.065069 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-1afa-account-create-update-nl8r4"] Feb 16 21:22:59 crc kubenswrapper[4811]: I0216 21:22:59.075713 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4b4c-account-create-update-q54gf"] Feb 16 21:22:59 crc kubenswrapper[4811]: I0216 21:22:59.083495 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-xw259"] Feb 16 21:22:59 crc kubenswrapper[4811]: I0216 21:22:59.092815 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-mstfh"] Feb 16 21:22:59 crc kubenswrapper[4811]: I0216 21:22:59.100998 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6610-account-create-update-brzlq"] Feb 16 21:22:59 crc kubenswrapper[4811]: I0216 21:22:59.108936 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-4b4c-account-create-update-q54gf"] Feb 16 21:22:59 crc kubenswrapper[4811]: I0216 21:22:59.118443 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/keystone-1afa-account-create-update-nl8r4"] Feb 16 21:22:59 crc kubenswrapper[4811]: I0216 21:22:59.128564 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6610-account-create-update-brzlq"] Feb 16 21:22:59 crc kubenswrapper[4811]: I0216 21:22:59.136772 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-xw259"] Feb 16 21:23:00 crc kubenswrapper[4811]: I0216 21:23:00.612443 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hksk8" Feb 16 21:23:00 crc kubenswrapper[4811]: I0216 21:23:00.612706 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hksk8" Feb 16 21:23:00 crc kubenswrapper[4811]: I0216 21:23:00.672912 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hksk8" Feb 16 21:23:00 crc kubenswrapper[4811]: I0216 21:23:00.716929 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48b8148b-cf17-4592-8583-edb4ccedca18" path="/var/lib/kubelet/pods/48b8148b-cf17-4592-8583-edb4ccedca18/volumes" Feb 16 21:23:00 crc kubenswrapper[4811]: I0216 21:23:00.718167 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aed10ff-a730-4ac8-88c7-395a71b9554b" path="/var/lib/kubelet/pods/4aed10ff-a730-4ac8-88c7-395a71b9554b/volumes" Feb 16 21:23:00 crc kubenswrapper[4811]: I0216 21:23:00.719484 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6922a5b7-d2e7-489e-b42d-1a54a1d85b6a" path="/var/lib/kubelet/pods/6922a5b7-d2e7-489e-b42d-1a54a1d85b6a/volumes" Feb 16 21:23:00 crc kubenswrapper[4811]: I0216 21:23:00.720672 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eebc5893-8007-4da8-8e04-9c54d1a7b57c" path="/var/lib/kubelet/pods/eebc5893-8007-4da8-8e04-9c54d1a7b57c/volumes" Feb 16 21:23:00 crc 
kubenswrapper[4811]: I0216 21:23:00.722805 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f556f9d0-3444-46b3-b435-dcf08cf76c0c" path="/var/lib/kubelet/pods/f556f9d0-3444-46b3-b435-dcf08cf76c0c/volumes" Feb 16 21:23:01 crc kubenswrapper[4811]: I0216 21:23:01.242990 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hksk8" Feb 16 21:23:01 crc kubenswrapper[4811]: I0216 21:23:01.309252 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hksk8"] Feb 16 21:23:01 crc kubenswrapper[4811]: I0216 21:23:01.702982 4811 scope.go:117] "RemoveContainer" containerID="89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" Feb 16 21:23:01 crc kubenswrapper[4811]: E0216 21:23:01.703507 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:23:03 crc kubenswrapper[4811]: I0216 21:23:03.179815 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hksk8" podUID="2e112d09-ccaa-413b-a2a4-533775de34f8" containerName="registry-server" containerID="cri-o://61ac0f334fceecbe5642927f4fb2d3ec2acdef33caefecdf7c62ce1842136441" gracePeriod=2 Feb 16 21:23:03 crc kubenswrapper[4811]: I0216 21:23:03.726485 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hksk8" Feb 16 21:23:03 crc kubenswrapper[4811]: I0216 21:23:03.750892 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz7fr\" (UniqueName: \"kubernetes.io/projected/2e112d09-ccaa-413b-a2a4-533775de34f8-kube-api-access-rz7fr\") pod \"2e112d09-ccaa-413b-a2a4-533775de34f8\" (UID: \"2e112d09-ccaa-413b-a2a4-533775de34f8\") " Feb 16 21:23:03 crc kubenswrapper[4811]: I0216 21:23:03.750960 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e112d09-ccaa-413b-a2a4-533775de34f8-catalog-content\") pod \"2e112d09-ccaa-413b-a2a4-533775de34f8\" (UID: \"2e112d09-ccaa-413b-a2a4-533775de34f8\") " Feb 16 21:23:03 crc kubenswrapper[4811]: I0216 21:23:03.751158 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e112d09-ccaa-413b-a2a4-533775de34f8-utilities\") pod \"2e112d09-ccaa-413b-a2a4-533775de34f8\" (UID: \"2e112d09-ccaa-413b-a2a4-533775de34f8\") " Feb 16 21:23:03 crc kubenswrapper[4811]: I0216 21:23:03.754682 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e112d09-ccaa-413b-a2a4-533775de34f8-utilities" (OuterVolumeSpecName: "utilities") pod "2e112d09-ccaa-413b-a2a4-533775de34f8" (UID: "2e112d09-ccaa-413b-a2a4-533775de34f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:23:03 crc kubenswrapper[4811]: I0216 21:23:03.765615 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e112d09-ccaa-413b-a2a4-533775de34f8-kube-api-access-rz7fr" (OuterVolumeSpecName: "kube-api-access-rz7fr") pod "2e112d09-ccaa-413b-a2a4-533775de34f8" (UID: "2e112d09-ccaa-413b-a2a4-533775de34f8"). InnerVolumeSpecName "kube-api-access-rz7fr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:23:03 crc kubenswrapper[4811]: I0216 21:23:03.841870 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e112d09-ccaa-413b-a2a4-533775de34f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e112d09-ccaa-413b-a2a4-533775de34f8" (UID: "2e112d09-ccaa-413b-a2a4-533775de34f8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:23:03 crc kubenswrapper[4811]: I0216 21:23:03.855530 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rz7fr\" (UniqueName: \"kubernetes.io/projected/2e112d09-ccaa-413b-a2a4-533775de34f8-kube-api-access-rz7fr\") on node \"crc\" DevicePath \"\"" Feb 16 21:23:03 crc kubenswrapper[4811]: I0216 21:23:03.855573 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e112d09-ccaa-413b-a2a4-533775de34f8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:23:03 crc kubenswrapper[4811]: I0216 21:23:03.855587 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e112d09-ccaa-413b-a2a4-533775de34f8-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:23:04 crc kubenswrapper[4811]: I0216 21:23:04.191117 4811 generic.go:334] "Generic (PLEG): container finished" podID="2e112d09-ccaa-413b-a2a4-533775de34f8" containerID="61ac0f334fceecbe5642927f4fb2d3ec2acdef33caefecdf7c62ce1842136441" exitCode=0 Feb 16 21:23:04 crc kubenswrapper[4811]: I0216 21:23:04.191186 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hksk8" event={"ID":"2e112d09-ccaa-413b-a2a4-533775de34f8","Type":"ContainerDied","Data":"61ac0f334fceecbe5642927f4fb2d3ec2acdef33caefecdf7c62ce1842136441"} Feb 16 21:23:04 crc kubenswrapper[4811]: I0216 21:23:04.191284 4811 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-hksk8" event={"ID":"2e112d09-ccaa-413b-a2a4-533775de34f8","Type":"ContainerDied","Data":"635c728406c60296709e0c02c0149653f32bddf523003b0accb3516f8f6ac178"} Feb 16 21:23:04 crc kubenswrapper[4811]: I0216 21:23:04.191318 4811 scope.go:117] "RemoveContainer" containerID="61ac0f334fceecbe5642927f4fb2d3ec2acdef33caefecdf7c62ce1842136441" Feb 16 21:23:04 crc kubenswrapper[4811]: I0216 21:23:04.192688 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hksk8" Feb 16 21:23:04 crc kubenswrapper[4811]: I0216 21:23:04.230475 4811 scope.go:117] "RemoveContainer" containerID="84ebfd8ca2046b8a790cd6f4c46175bbb96d8cc16648b40929270595d9c8389e" Feb 16 21:23:04 crc kubenswrapper[4811]: I0216 21:23:04.251263 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hksk8"] Feb 16 21:23:04 crc kubenswrapper[4811]: I0216 21:23:04.265171 4811 scope.go:117] "RemoveContainer" containerID="b0fa859a3ee229192bb7844875fa8a70a0bb9a1f116a570d9b257d719b070089" Feb 16 21:23:04 crc kubenswrapper[4811]: I0216 21:23:04.266852 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hksk8"] Feb 16 21:23:04 crc kubenswrapper[4811]: I0216 21:23:04.314364 4811 scope.go:117] "RemoveContainer" containerID="61ac0f334fceecbe5642927f4fb2d3ec2acdef33caefecdf7c62ce1842136441" Feb 16 21:23:04 crc kubenswrapper[4811]: E0216 21:23:04.314843 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61ac0f334fceecbe5642927f4fb2d3ec2acdef33caefecdf7c62ce1842136441\": container with ID starting with 61ac0f334fceecbe5642927f4fb2d3ec2acdef33caefecdf7c62ce1842136441 not found: ID does not exist" containerID="61ac0f334fceecbe5642927f4fb2d3ec2acdef33caefecdf7c62ce1842136441" Feb 16 21:23:04 crc kubenswrapper[4811]: I0216 
21:23:04.314884 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61ac0f334fceecbe5642927f4fb2d3ec2acdef33caefecdf7c62ce1842136441"} err="failed to get container status \"61ac0f334fceecbe5642927f4fb2d3ec2acdef33caefecdf7c62ce1842136441\": rpc error: code = NotFound desc = could not find container \"61ac0f334fceecbe5642927f4fb2d3ec2acdef33caefecdf7c62ce1842136441\": container with ID starting with 61ac0f334fceecbe5642927f4fb2d3ec2acdef33caefecdf7c62ce1842136441 not found: ID does not exist" Feb 16 21:23:04 crc kubenswrapper[4811]: I0216 21:23:04.314911 4811 scope.go:117] "RemoveContainer" containerID="84ebfd8ca2046b8a790cd6f4c46175bbb96d8cc16648b40929270595d9c8389e" Feb 16 21:23:04 crc kubenswrapper[4811]: E0216 21:23:04.315229 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84ebfd8ca2046b8a790cd6f4c46175bbb96d8cc16648b40929270595d9c8389e\": container with ID starting with 84ebfd8ca2046b8a790cd6f4c46175bbb96d8cc16648b40929270595d9c8389e not found: ID does not exist" containerID="84ebfd8ca2046b8a790cd6f4c46175bbb96d8cc16648b40929270595d9c8389e" Feb 16 21:23:04 crc kubenswrapper[4811]: I0216 21:23:04.315265 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84ebfd8ca2046b8a790cd6f4c46175bbb96d8cc16648b40929270595d9c8389e"} err="failed to get container status \"84ebfd8ca2046b8a790cd6f4c46175bbb96d8cc16648b40929270595d9c8389e\": rpc error: code = NotFound desc = could not find container \"84ebfd8ca2046b8a790cd6f4c46175bbb96d8cc16648b40929270595d9c8389e\": container with ID starting with 84ebfd8ca2046b8a790cd6f4c46175bbb96d8cc16648b40929270595d9c8389e not found: ID does not exist" Feb 16 21:23:04 crc kubenswrapper[4811]: I0216 21:23:04.315305 4811 scope.go:117] "RemoveContainer" containerID="b0fa859a3ee229192bb7844875fa8a70a0bb9a1f116a570d9b257d719b070089" Feb 16 21:23:04 crc 
kubenswrapper[4811]: E0216 21:23:04.315726 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0fa859a3ee229192bb7844875fa8a70a0bb9a1f116a570d9b257d719b070089\": container with ID starting with b0fa859a3ee229192bb7844875fa8a70a0bb9a1f116a570d9b257d719b070089 not found: ID does not exist" containerID="b0fa859a3ee229192bb7844875fa8a70a0bb9a1f116a570d9b257d719b070089" Feb 16 21:23:04 crc kubenswrapper[4811]: I0216 21:23:04.315754 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0fa859a3ee229192bb7844875fa8a70a0bb9a1f116a570d9b257d719b070089"} err="failed to get container status \"b0fa859a3ee229192bb7844875fa8a70a0bb9a1f116a570d9b257d719b070089\": rpc error: code = NotFound desc = could not find container \"b0fa859a3ee229192bb7844875fa8a70a0bb9a1f116a570d9b257d719b070089\": container with ID starting with b0fa859a3ee229192bb7844875fa8a70a0bb9a1f116a570d9b257d719b070089 not found: ID does not exist" Feb 16 21:23:04 crc kubenswrapper[4811]: I0216 21:23:04.715534 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e112d09-ccaa-413b-a2a4-533775de34f8" path="/var/lib/kubelet/pods/2e112d09-ccaa-413b-a2a4-533775de34f8/volumes" Feb 16 21:23:06 crc kubenswrapper[4811]: I0216 21:23:06.328616 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8gqmj"] Feb 16 21:23:06 crc kubenswrapper[4811]: E0216 21:23:06.329545 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e112d09-ccaa-413b-a2a4-533775de34f8" containerName="registry-server" Feb 16 21:23:06 crc kubenswrapper[4811]: I0216 21:23:06.329569 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e112d09-ccaa-413b-a2a4-533775de34f8" containerName="registry-server" Feb 16 21:23:06 crc kubenswrapper[4811]: E0216 21:23:06.329600 4811 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2e112d09-ccaa-413b-a2a4-533775de34f8" containerName="extract-utilities" Feb 16 21:23:06 crc kubenswrapper[4811]: I0216 21:23:06.329613 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e112d09-ccaa-413b-a2a4-533775de34f8" containerName="extract-utilities" Feb 16 21:23:06 crc kubenswrapper[4811]: E0216 21:23:06.329654 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e112d09-ccaa-413b-a2a4-533775de34f8" containerName="extract-content" Feb 16 21:23:06 crc kubenswrapper[4811]: I0216 21:23:06.329666 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e112d09-ccaa-413b-a2a4-533775de34f8" containerName="extract-content" Feb 16 21:23:06 crc kubenswrapper[4811]: I0216 21:23:06.330050 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e112d09-ccaa-413b-a2a4-533775de34f8" containerName="registry-server" Feb 16 21:23:06 crc kubenswrapper[4811]: I0216 21:23:06.332581 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8gqmj" Feb 16 21:23:06 crc kubenswrapper[4811]: I0216 21:23:06.361223 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8gqmj"] Feb 16 21:23:06 crc kubenswrapper[4811]: I0216 21:23:06.417635 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/153361c7-730c-4a48-b920-e0596d43fe17-catalog-content\") pod \"community-operators-8gqmj\" (UID: \"153361c7-730c-4a48-b920-e0596d43fe17\") " pod="openshift-marketplace/community-operators-8gqmj" Feb 16 21:23:06 crc kubenswrapper[4811]: I0216 21:23:06.417728 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/153361c7-730c-4a48-b920-e0596d43fe17-utilities\") pod \"community-operators-8gqmj\" (UID: 
\"153361c7-730c-4a48-b920-e0596d43fe17\") " pod="openshift-marketplace/community-operators-8gqmj" Feb 16 21:23:06 crc kubenswrapper[4811]: I0216 21:23:06.417775 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ddbg\" (UniqueName: \"kubernetes.io/projected/153361c7-730c-4a48-b920-e0596d43fe17-kube-api-access-2ddbg\") pod \"community-operators-8gqmj\" (UID: \"153361c7-730c-4a48-b920-e0596d43fe17\") " pod="openshift-marketplace/community-operators-8gqmj" Feb 16 21:23:06 crc kubenswrapper[4811]: I0216 21:23:06.521821 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/153361c7-730c-4a48-b920-e0596d43fe17-catalog-content\") pod \"community-operators-8gqmj\" (UID: \"153361c7-730c-4a48-b920-e0596d43fe17\") " pod="openshift-marketplace/community-operators-8gqmj" Feb 16 21:23:06 crc kubenswrapper[4811]: I0216 21:23:06.522078 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/153361c7-730c-4a48-b920-e0596d43fe17-utilities\") pod \"community-operators-8gqmj\" (UID: \"153361c7-730c-4a48-b920-e0596d43fe17\") " pod="openshift-marketplace/community-operators-8gqmj" Feb 16 21:23:06 crc kubenswrapper[4811]: I0216 21:23:06.522173 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ddbg\" (UniqueName: \"kubernetes.io/projected/153361c7-730c-4a48-b920-e0596d43fe17-kube-api-access-2ddbg\") pod \"community-operators-8gqmj\" (UID: \"153361c7-730c-4a48-b920-e0596d43fe17\") " pod="openshift-marketplace/community-operators-8gqmj" Feb 16 21:23:06 crc kubenswrapper[4811]: I0216 21:23:06.522495 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/153361c7-730c-4a48-b920-e0596d43fe17-catalog-content\") pod 
\"community-operators-8gqmj\" (UID: \"153361c7-730c-4a48-b920-e0596d43fe17\") " pod="openshift-marketplace/community-operators-8gqmj" Feb 16 21:23:06 crc kubenswrapper[4811]: I0216 21:23:06.522518 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/153361c7-730c-4a48-b920-e0596d43fe17-utilities\") pod \"community-operators-8gqmj\" (UID: \"153361c7-730c-4a48-b920-e0596d43fe17\") " pod="openshift-marketplace/community-operators-8gqmj" Feb 16 21:23:06 crc kubenswrapper[4811]: I0216 21:23:06.554696 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ddbg\" (UniqueName: \"kubernetes.io/projected/153361c7-730c-4a48-b920-e0596d43fe17-kube-api-access-2ddbg\") pod \"community-operators-8gqmj\" (UID: \"153361c7-730c-4a48-b920-e0596d43fe17\") " pod="openshift-marketplace/community-operators-8gqmj" Feb 16 21:23:06 crc kubenswrapper[4811]: I0216 21:23:06.662665 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8gqmj" Feb 16 21:23:06 crc kubenswrapper[4811]: E0216 21:23:06.706258 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:23:07 crc kubenswrapper[4811]: I0216 21:23:07.306591 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8gqmj"] Feb 16 21:23:08 crc kubenswrapper[4811]: I0216 21:23:08.055332 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-7732-account-create-update-xfp4c"] Feb 16 21:23:08 crc kubenswrapper[4811]: I0216 21:23:08.068414 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-create-52hns"] Feb 16 21:23:08 crc kubenswrapper[4811]: I0216 21:23:08.080979 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-g4gc5"] Feb 16 21:23:08 crc kubenswrapper[4811]: I0216 21:23:08.092903 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-2f7e-account-create-update-ssxfp"] Feb 16 21:23:08 crc kubenswrapper[4811]: I0216 21:23:08.103386 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-create-52hns"] Feb 16 21:23:08 crc kubenswrapper[4811]: I0216 21:23:08.113281 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-7732-account-create-update-xfp4c"] Feb 16 21:23:08 crc kubenswrapper[4811]: I0216 21:23:08.128247 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-g4gc5"] Feb 16 21:23:08 crc kubenswrapper[4811]: I0216 21:23:08.149173 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cloudkitty-2f7e-account-create-update-ssxfp"] Feb 16 21:23:08 crc kubenswrapper[4811]: I0216 21:23:08.242230 4811 generic.go:334] "Generic (PLEG): container finished" podID="153361c7-730c-4a48-b920-e0596d43fe17" containerID="52c2ac54124ae6d5cdfb85b2751ba4a1e35cd3db2c1eebff081c1b7c6a3dcf65" exitCode=0 Feb 16 21:23:08 crc kubenswrapper[4811]: I0216 21:23:08.242278 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gqmj" event={"ID":"153361c7-730c-4a48-b920-e0596d43fe17","Type":"ContainerDied","Data":"52c2ac54124ae6d5cdfb85b2751ba4a1e35cd3db2c1eebff081c1b7c6a3dcf65"} Feb 16 21:23:08 crc kubenswrapper[4811]: I0216 21:23:08.242304 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gqmj" event={"ID":"153361c7-730c-4a48-b920-e0596d43fe17","Type":"ContainerStarted","Data":"8f82da6a7e28eea614cfbd9e38168b02515e46be2e08f4f995dae1d4b6472db0"} Feb 16 21:23:08 crc kubenswrapper[4811]: I0216 21:23:08.725453 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="313d0e82-09f0-4085-ac8b-9eafe564b8ec" path="/var/lib/kubelet/pods/313d0e82-09f0-4085-ac8b-9eafe564b8ec/volumes" Feb 16 21:23:08 crc kubenswrapper[4811]: I0216 21:23:08.727792 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="609199ec-a876-41fd-835a-826bb246817d" path="/var/lib/kubelet/pods/609199ec-a876-41fd-835a-826bb246817d/volumes" Feb 16 21:23:08 crc kubenswrapper[4811]: I0216 21:23:08.729671 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="706cf667-a9da-4c0b-b0c2-8938db9f1b8c" path="/var/lib/kubelet/pods/706cf667-a9da-4c0b-b0c2-8938db9f1b8c/volumes" Feb 16 21:23:08 crc kubenswrapper[4811]: I0216 21:23:08.731877 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe5848b7-b291-4c54-a226-dfd4eedbea37" path="/var/lib/kubelet/pods/fe5848b7-b291-4c54-a226-dfd4eedbea37/volumes" Feb 16 21:23:09 crc 
kubenswrapper[4811]: I0216 21:23:09.252108 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gqmj" event={"ID":"153361c7-730c-4a48-b920-e0596d43fe17","Type":"ContainerStarted","Data":"1cb7a294bdef23e21777d746bb6d7f8531e46132206825ec7f20f4dcfbf92e30"} Feb 16 21:23:10 crc kubenswrapper[4811]: I0216 21:23:10.269390 4811 generic.go:334] "Generic (PLEG): container finished" podID="153361c7-730c-4a48-b920-e0596d43fe17" containerID="1cb7a294bdef23e21777d746bb6d7f8531e46132206825ec7f20f4dcfbf92e30" exitCode=0 Feb 16 21:23:10 crc kubenswrapper[4811]: I0216 21:23:10.269476 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gqmj" event={"ID":"153361c7-730c-4a48-b920-e0596d43fe17","Type":"ContainerDied","Data":"1cb7a294bdef23e21777d746bb6d7f8531e46132206825ec7f20f4dcfbf92e30"} Feb 16 21:23:11 crc kubenswrapper[4811]: I0216 21:23:11.036652 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-n7dp7"] Feb 16 21:23:11 crc kubenswrapper[4811]: I0216 21:23:11.048658 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-vx8vr"] Feb 16 21:23:11 crc kubenswrapper[4811]: I0216 21:23:11.057336 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ac65-account-create-update-cg9vg"] Feb 16 21:23:11 crc kubenswrapper[4811]: I0216 21:23:11.065246 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-0e26-account-create-update-28c89"] Feb 16 21:23:11 crc kubenswrapper[4811]: I0216 21:23:11.073146 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-ac65-account-create-update-cg9vg"] Feb 16 21:23:11 crc kubenswrapper[4811]: I0216 21:23:11.081323 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-0e26-account-create-update-28c89"] Feb 16 21:23:11 crc kubenswrapper[4811]: I0216 21:23:11.089559 4811 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/barbican-db-create-n7dp7"] Feb 16 21:23:11 crc kubenswrapper[4811]: I0216 21:23:11.097957 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-vx8vr"] Feb 16 21:23:11 crc kubenswrapper[4811]: I0216 21:23:11.283369 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gqmj" event={"ID":"153361c7-730c-4a48-b920-e0596d43fe17","Type":"ContainerStarted","Data":"b574eab9f774bb6ba17b5a5b25891c83d879a2b26c827295a286a0506d094055"} Feb 16 21:23:11 crc kubenswrapper[4811]: I0216 21:23:11.299689 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8gqmj" podStartSLOduration=2.823390623 podStartE2EDuration="5.299666559s" podCreationTimestamp="2026-02-16 21:23:06 +0000 UTC" firstStartedPulling="2026-02-16 21:23:08.244759718 +0000 UTC m=+1606.174055656" lastFinishedPulling="2026-02-16 21:23:10.721035644 +0000 UTC m=+1608.650331592" observedRunningTime="2026-02-16 21:23:11.299411142 +0000 UTC m=+1609.228707070" watchObservedRunningTime="2026-02-16 21:23:11.299666559 +0000 UTC m=+1609.228962497" Feb 16 21:23:12 crc kubenswrapper[4811]: I0216 21:23:12.715520 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8" path="/var/lib/kubelet/pods/62091e8f-8bdf-4e43-9220-9dbcfc3a4bc8/volumes" Feb 16 21:23:12 crc kubenswrapper[4811]: I0216 21:23:12.716371 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6908fe5f-6f5a-4425-96fe-1b5d0998c02c" path="/var/lib/kubelet/pods/6908fe5f-6f5a-4425-96fe-1b5d0998c02c/volumes" Feb 16 21:23:12 crc kubenswrapper[4811]: I0216 21:23:12.716940 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70d61096-bb5d-43e1-ba73-1829b343aec7" path="/var/lib/kubelet/pods/70d61096-bb5d-43e1-ba73-1829b343aec7/volumes" Feb 16 21:23:12 crc kubenswrapper[4811]: I0216 21:23:12.717493 
4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd3e590c-550e-4dbc-a82b-8e81ac468062" path="/var/lib/kubelet/pods/fd3e590c-550e-4dbc-a82b-8e81ac468062/volumes" Feb 16 21:23:13 crc kubenswrapper[4811]: I0216 21:23:13.703058 4811 scope.go:117] "RemoveContainer" containerID="89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" Feb 16 21:23:13 crc kubenswrapper[4811]: E0216 21:23:13.703408 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:23:16 crc kubenswrapper[4811]: I0216 21:23:16.664004 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8gqmj" Feb 16 21:23:16 crc kubenswrapper[4811]: I0216 21:23:16.664804 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8gqmj" Feb 16 21:23:16 crc kubenswrapper[4811]: I0216 21:23:16.732054 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8gqmj" Feb 16 21:23:17 crc kubenswrapper[4811]: I0216 21:23:17.399982 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8gqmj" Feb 16 21:23:17 crc kubenswrapper[4811]: I0216 21:23:17.460540 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8gqmj"] Feb 16 21:23:18 crc kubenswrapper[4811]: E0216 21:23:18.706353 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:23:19 crc kubenswrapper[4811]: I0216 21:23:19.381690 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8gqmj" podUID="153361c7-730c-4a48-b920-e0596d43fe17" containerName="registry-server" containerID="cri-o://b574eab9f774bb6ba17b5a5b25891c83d879a2b26c827295a286a0506d094055" gracePeriod=2 Feb 16 21:23:19 crc kubenswrapper[4811]: I0216 21:23:19.969434 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8gqmj" Feb 16 21:23:20 crc kubenswrapper[4811]: I0216 21:23:20.029080 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ddbg\" (UniqueName: \"kubernetes.io/projected/153361c7-730c-4a48-b920-e0596d43fe17-kube-api-access-2ddbg\") pod \"153361c7-730c-4a48-b920-e0596d43fe17\" (UID: \"153361c7-730c-4a48-b920-e0596d43fe17\") " Feb 16 21:23:20 crc kubenswrapper[4811]: I0216 21:23:20.029341 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/153361c7-730c-4a48-b920-e0596d43fe17-catalog-content\") pod \"153361c7-730c-4a48-b920-e0596d43fe17\" (UID: \"153361c7-730c-4a48-b920-e0596d43fe17\") " Feb 16 21:23:20 crc kubenswrapper[4811]: I0216 21:23:20.029565 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/153361c7-730c-4a48-b920-e0596d43fe17-utilities\") pod \"153361c7-730c-4a48-b920-e0596d43fe17\" (UID: \"153361c7-730c-4a48-b920-e0596d43fe17\") " Feb 16 21:23:20 crc kubenswrapper[4811]: I0216 21:23:20.030418 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/153361c7-730c-4a48-b920-e0596d43fe17-utilities" (OuterVolumeSpecName: "utilities") pod "153361c7-730c-4a48-b920-e0596d43fe17" (UID: "153361c7-730c-4a48-b920-e0596d43fe17"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:23:20 crc kubenswrapper[4811]: I0216 21:23:20.030800 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/153361c7-730c-4a48-b920-e0596d43fe17-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:23:20 crc kubenswrapper[4811]: I0216 21:23:20.038436 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/153361c7-730c-4a48-b920-e0596d43fe17-kube-api-access-2ddbg" (OuterVolumeSpecName: "kube-api-access-2ddbg") pod "153361c7-730c-4a48-b920-e0596d43fe17" (UID: "153361c7-730c-4a48-b920-e0596d43fe17"). InnerVolumeSpecName "kube-api-access-2ddbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:23:20 crc kubenswrapper[4811]: I0216 21:23:20.105132 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/153361c7-730c-4a48-b920-e0596d43fe17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "153361c7-730c-4a48-b920-e0596d43fe17" (UID: "153361c7-730c-4a48-b920-e0596d43fe17"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:23:20 crc kubenswrapper[4811]: I0216 21:23:20.132863 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ddbg\" (UniqueName: \"kubernetes.io/projected/153361c7-730c-4a48-b920-e0596d43fe17-kube-api-access-2ddbg\") on node \"crc\" DevicePath \"\"" Feb 16 21:23:20 crc kubenswrapper[4811]: I0216 21:23:20.132902 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/153361c7-730c-4a48-b920-e0596d43fe17-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:23:20 crc kubenswrapper[4811]: I0216 21:23:20.397267 4811 generic.go:334] "Generic (PLEG): container finished" podID="153361c7-730c-4a48-b920-e0596d43fe17" containerID="b574eab9f774bb6ba17b5a5b25891c83d879a2b26c827295a286a0506d094055" exitCode=0 Feb 16 21:23:20 crc kubenswrapper[4811]: I0216 21:23:20.397312 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gqmj" event={"ID":"153361c7-730c-4a48-b920-e0596d43fe17","Type":"ContainerDied","Data":"b574eab9f774bb6ba17b5a5b25891c83d879a2b26c827295a286a0506d094055"} Feb 16 21:23:20 crc kubenswrapper[4811]: I0216 21:23:20.397341 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gqmj" event={"ID":"153361c7-730c-4a48-b920-e0596d43fe17","Type":"ContainerDied","Data":"8f82da6a7e28eea614cfbd9e38168b02515e46be2e08f4f995dae1d4b6472db0"} Feb 16 21:23:20 crc kubenswrapper[4811]: I0216 21:23:20.397386 4811 scope.go:117] "RemoveContainer" containerID="b574eab9f774bb6ba17b5a5b25891c83d879a2b26c827295a286a0506d094055" Feb 16 21:23:20 crc kubenswrapper[4811]: I0216 21:23:20.397445 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8gqmj" Feb 16 21:23:20 crc kubenswrapper[4811]: I0216 21:23:20.444912 4811 scope.go:117] "RemoveContainer" containerID="1cb7a294bdef23e21777d746bb6d7f8531e46132206825ec7f20f4dcfbf92e30" Feb 16 21:23:20 crc kubenswrapper[4811]: I0216 21:23:20.454107 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8gqmj"] Feb 16 21:23:20 crc kubenswrapper[4811]: I0216 21:23:20.463654 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8gqmj"] Feb 16 21:23:20 crc kubenswrapper[4811]: I0216 21:23:20.498580 4811 scope.go:117] "RemoveContainer" containerID="52c2ac54124ae6d5cdfb85b2751ba4a1e35cd3db2c1eebff081c1b7c6a3dcf65" Feb 16 21:23:20 crc kubenswrapper[4811]: I0216 21:23:20.547647 4811 scope.go:117] "RemoveContainer" containerID="b574eab9f774bb6ba17b5a5b25891c83d879a2b26c827295a286a0506d094055" Feb 16 21:23:20 crc kubenswrapper[4811]: E0216 21:23:20.548061 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b574eab9f774bb6ba17b5a5b25891c83d879a2b26c827295a286a0506d094055\": container with ID starting with b574eab9f774bb6ba17b5a5b25891c83d879a2b26c827295a286a0506d094055 not found: ID does not exist" containerID="b574eab9f774bb6ba17b5a5b25891c83d879a2b26c827295a286a0506d094055" Feb 16 21:23:20 crc kubenswrapper[4811]: I0216 21:23:20.548104 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b574eab9f774bb6ba17b5a5b25891c83d879a2b26c827295a286a0506d094055"} err="failed to get container status \"b574eab9f774bb6ba17b5a5b25891c83d879a2b26c827295a286a0506d094055\": rpc error: code = NotFound desc = could not find container \"b574eab9f774bb6ba17b5a5b25891c83d879a2b26c827295a286a0506d094055\": container with ID starting with b574eab9f774bb6ba17b5a5b25891c83d879a2b26c827295a286a0506d094055 not 
found: ID does not exist" Feb 16 21:23:20 crc kubenswrapper[4811]: I0216 21:23:20.548132 4811 scope.go:117] "RemoveContainer" containerID="1cb7a294bdef23e21777d746bb6d7f8531e46132206825ec7f20f4dcfbf92e30" Feb 16 21:23:20 crc kubenswrapper[4811]: E0216 21:23:20.548531 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cb7a294bdef23e21777d746bb6d7f8531e46132206825ec7f20f4dcfbf92e30\": container with ID starting with 1cb7a294bdef23e21777d746bb6d7f8531e46132206825ec7f20f4dcfbf92e30 not found: ID does not exist" containerID="1cb7a294bdef23e21777d746bb6d7f8531e46132206825ec7f20f4dcfbf92e30" Feb 16 21:23:20 crc kubenswrapper[4811]: I0216 21:23:20.548641 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cb7a294bdef23e21777d746bb6d7f8531e46132206825ec7f20f4dcfbf92e30"} err="failed to get container status \"1cb7a294bdef23e21777d746bb6d7f8531e46132206825ec7f20f4dcfbf92e30\": rpc error: code = NotFound desc = could not find container \"1cb7a294bdef23e21777d746bb6d7f8531e46132206825ec7f20f4dcfbf92e30\": container with ID starting with 1cb7a294bdef23e21777d746bb6d7f8531e46132206825ec7f20f4dcfbf92e30 not found: ID does not exist" Feb 16 21:23:20 crc kubenswrapper[4811]: I0216 21:23:20.548761 4811 scope.go:117] "RemoveContainer" containerID="52c2ac54124ae6d5cdfb85b2751ba4a1e35cd3db2c1eebff081c1b7c6a3dcf65" Feb 16 21:23:20 crc kubenswrapper[4811]: E0216 21:23:20.549115 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52c2ac54124ae6d5cdfb85b2751ba4a1e35cd3db2c1eebff081c1b7c6a3dcf65\": container with ID starting with 52c2ac54124ae6d5cdfb85b2751ba4a1e35cd3db2c1eebff081c1b7c6a3dcf65 not found: ID does not exist" containerID="52c2ac54124ae6d5cdfb85b2751ba4a1e35cd3db2c1eebff081c1b7c6a3dcf65" Feb 16 21:23:20 crc kubenswrapper[4811]: I0216 21:23:20.549147 4811 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52c2ac54124ae6d5cdfb85b2751ba4a1e35cd3db2c1eebff081c1b7c6a3dcf65"} err="failed to get container status \"52c2ac54124ae6d5cdfb85b2751ba4a1e35cd3db2c1eebff081c1b7c6a3dcf65\": rpc error: code = NotFound desc = could not find container \"52c2ac54124ae6d5cdfb85b2751ba4a1e35cd3db2c1eebff081c1b7c6a3dcf65\": container with ID starting with 52c2ac54124ae6d5cdfb85b2751ba4a1e35cd3db2c1eebff081c1b7c6a3dcf65 not found: ID does not exist" Feb 16 21:23:20 crc kubenswrapper[4811]: I0216 21:23:20.726774 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="153361c7-730c-4a48-b920-e0596d43fe17" path="/var/lib/kubelet/pods/153361c7-730c-4a48-b920-e0596d43fe17/volumes" Feb 16 21:23:22 crc kubenswrapper[4811]: I0216 21:23:22.048543 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2drks"] Feb 16 21:23:22 crc kubenswrapper[4811]: I0216 21:23:22.062600 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-2drks"] Feb 16 21:23:22 crc kubenswrapper[4811]: I0216 21:23:22.718973 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe" path="/var/lib/kubelet/pods/22d0db76-86e7-4dcd-ab8f-fc9e6e5a4dfe/volumes" Feb 16 21:23:25 crc kubenswrapper[4811]: I0216 21:23:25.059008 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-rmfvr"] Feb 16 21:23:25 crc kubenswrapper[4811]: I0216 21:23:25.070505 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-rmfvr"] Feb 16 21:23:26 crc kubenswrapper[4811]: I0216 21:23:26.720300 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44ff615c-b0ce-42f1-b01a-7a59d64dacc1" path="/var/lib/kubelet/pods/44ff615c-b0ce-42f1-b01a-7a59d64dacc1/volumes" Feb 16 21:23:27 crc kubenswrapper[4811]: I0216 21:23:27.703943 4811 
scope.go:117] "RemoveContainer" containerID="89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" Feb 16 21:23:27 crc kubenswrapper[4811]: E0216 21:23:27.704360 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:23:30 crc kubenswrapper[4811]: I0216 21:23:30.034082 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-fx82t"] Feb 16 21:23:30 crc kubenswrapper[4811]: I0216 21:23:30.046943 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-fx82t"] Feb 16 21:23:30 crc kubenswrapper[4811]: I0216 21:23:30.720513 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a2e3a6d-e105-43a4-bdae-9ef2bde0f137" path="/var/lib/kubelet/pods/3a2e3a6d-e105-43a4-bdae-9ef2bde0f137/volumes" Feb 16 21:23:32 crc kubenswrapper[4811]: E0216 21:23:32.720463 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:23:40 crc kubenswrapper[4811]: I0216 21:23:40.704272 4811 scope.go:117] "RemoveContainer" containerID="89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" Feb 16 21:23:40 crc kubenswrapper[4811]: E0216 21:23:40.705351 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:23:45 crc kubenswrapper[4811]: E0216 21:23:45.707390 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:23:51 crc kubenswrapper[4811]: I0216 21:23:51.062767 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-gbrql"] Feb 16 21:23:51 crc kubenswrapper[4811]: I0216 21:23:51.074231 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-gbrql"] Feb 16 21:23:51 crc kubenswrapper[4811]: I0216 21:23:51.588170 4811 scope.go:117] "RemoveContainer" containerID="5e299b53bfb4a24c4ae0c44540b5106081d775033193a3bf8fa260de54391459" Feb 16 21:23:51 crc kubenswrapper[4811]: I0216 21:23:51.625176 4811 scope.go:117] "RemoveContainer" containerID="a9f35813ce7830f429d370e997e41a109acd2dd7b168756ccd8fb8332c1b7f18" Feb 16 21:23:51 crc kubenswrapper[4811]: I0216 21:23:51.673573 4811 scope.go:117] "RemoveContainer" containerID="80ff26fc1997c91f922c78965c5dbdece23f16852a22210f747b539e8d734331" Feb 16 21:23:51 crc kubenswrapper[4811]: I0216 21:23:51.732042 4811 scope.go:117] "RemoveContainer" containerID="a432017193da7461ce95f2529a6311bca58a7a6b5b77768578d5fb55f3c5b094" Feb 16 21:23:51 crc kubenswrapper[4811]: I0216 21:23:51.780936 4811 scope.go:117] "RemoveContainer" containerID="5c4a9a194033c09945c90447170d839d83636f1a1a0811f1b2e47bfbb34bc1b4" Feb 16 21:23:51 crc kubenswrapper[4811]: I0216 21:23:51.825651 4811 scope.go:117] "RemoveContainer" 
containerID="907b0e27480d39716fa5f5dc2e9c5df4058467e22407359b2784de0802139c93" Feb 16 21:23:51 crc kubenswrapper[4811]: I0216 21:23:51.868185 4811 scope.go:117] "RemoveContainer" containerID="bc35dfaa32f2c323ce09c949a19d8a2d682b9c0061ba49203b45ef63e29fa721" Feb 16 21:23:51 crc kubenswrapper[4811]: I0216 21:23:51.912464 4811 scope.go:117] "RemoveContainer" containerID="a4d28d60141c5334a374a161f6cba467bce06b694fb65ecd9709368fc11b1fbe" Feb 16 21:23:51 crc kubenswrapper[4811]: I0216 21:23:51.934481 4811 scope.go:117] "RemoveContainer" containerID="bbfc63d03b97e0472b63b5e56415118c80696561470e41c04fa1dd9ad0a4da19" Feb 16 21:23:51 crc kubenswrapper[4811]: I0216 21:23:51.960745 4811 scope.go:117] "RemoveContainer" containerID="ff7537c2fb0ff2776acdbdf93f70c59ac61b4b53f56fd8d6944fc435ac925e5c" Feb 16 21:23:51 crc kubenswrapper[4811]: I0216 21:23:51.992738 4811 scope.go:117] "RemoveContainer" containerID="4e86ebccf5a7e4996125c2e1ea7759a8c95739bbb91d5e29eb8af75c958413fb" Feb 16 21:23:52 crc kubenswrapper[4811]: I0216 21:23:52.021117 4811 scope.go:117] "RemoveContainer" containerID="93ac4d3a1a719889246ce0c2033ac120f7e1f67b937f2537ddeac795e2776292" Feb 16 21:23:52 crc kubenswrapper[4811]: I0216 21:23:52.045934 4811 scope.go:117] "RemoveContainer" containerID="c6703f506da4fd7d33b0d0ca7af956e4969080dd52a4459be99d7b955ba9303a" Feb 16 21:23:52 crc kubenswrapper[4811]: I0216 21:23:52.071293 4811 scope.go:117] "RemoveContainer" containerID="cd4e074696862cbc4687603627e901236da328969c21dfa225478a02be826b46" Feb 16 21:23:52 crc kubenswrapper[4811]: I0216 21:23:52.096766 4811 scope.go:117] "RemoveContainer" containerID="44fd0337aaec1b0c5c944fa1876256e9981620e22dd378fae48706e25ee6f514" Feb 16 21:23:52 crc kubenswrapper[4811]: I0216 21:23:52.123560 4811 scope.go:117] "RemoveContainer" containerID="e8d738ae84353f29467794a0dc974dc64d81fd85c3ae7ded93fdf8da7ac6935a" Feb 16 21:23:52 crc kubenswrapper[4811]: I0216 21:23:52.162001 4811 scope.go:117] "RemoveContainer" 
containerID="4c980ac24f5fc3d27966e6bbc6d0dd015591904629ee92cb6128a9162992dc2d" Feb 16 21:23:52 crc kubenswrapper[4811]: I0216 21:23:52.716075 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3237b6a2-9b91-41f2-bcea-21b9f5e91f80" path="/var/lib/kubelet/pods/3237b6a2-9b91-41f2-bcea-21b9f5e91f80/volumes" Feb 16 21:23:53 crc kubenswrapper[4811]: I0216 21:23:53.703948 4811 scope.go:117] "RemoveContainer" containerID="89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" Feb 16 21:23:53 crc kubenswrapper[4811]: E0216 21:23:53.704650 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:23:55 crc kubenswrapper[4811]: I0216 21:23:55.043562 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-8xm4f"] Feb 16 21:23:55 crc kubenswrapper[4811]: I0216 21:23:55.067513 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-8xm4f"] Feb 16 21:23:56 crc kubenswrapper[4811]: I0216 21:23:56.726514 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3518c12b-e37a-4c8d-bbb5-c84f79d45948" path="/var/lib/kubelet/pods/3518c12b-e37a-4c8d-bbb5-c84f79d45948/volumes" Feb 16 21:23:58 crc kubenswrapper[4811]: I0216 21:23:58.060515 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-l6k27"] Feb 16 21:23:58 crc kubenswrapper[4811]: I0216 21:23:58.072655 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-l6k27"] Feb 16 21:23:58 crc kubenswrapper[4811]: E0216 21:23:58.706343 4811 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:23:58 crc kubenswrapper[4811]: I0216 21:23:58.724414 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0da681f8-0bc1-49c1-b1ae-82ec13f671e1" path="/var/lib/kubelet/pods/0da681f8-0bc1-49c1-b1ae-82ec13f671e1/volumes" Feb 16 21:24:07 crc kubenswrapper[4811]: I0216 21:24:07.704987 4811 scope.go:117] "RemoveContainer" containerID="89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" Feb 16 21:24:07 crc kubenswrapper[4811]: E0216 21:24:07.705639 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:24:09 crc kubenswrapper[4811]: I0216 21:24:09.071512 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-njvjn"] Feb 16 21:24:09 crc kubenswrapper[4811]: I0216 21:24:09.092415 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-njvjn"] Feb 16 21:24:09 crc kubenswrapper[4811]: E0216 21:24:09.705975 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:24:10 crc kubenswrapper[4811]: I0216 21:24:10.044446 
4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-qv84d"] Feb 16 21:24:10 crc kubenswrapper[4811]: I0216 21:24:10.060336 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-qv84d"] Feb 16 21:24:10 crc kubenswrapper[4811]: I0216 21:24:10.723761 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a07ef56-cd30-4652-9fdd-65279e9b5fb5" path="/var/lib/kubelet/pods/6a07ef56-cd30-4652-9fdd-65279e9b5fb5/volumes" Feb 16 21:24:10 crc kubenswrapper[4811]: I0216 21:24:10.724833 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89a1f359-cb47-470b-ad6e-48d11efacfce" path="/var/lib/kubelet/pods/89a1f359-cb47-470b-ad6e-48d11efacfce/volumes" Feb 16 21:24:19 crc kubenswrapper[4811]: I0216 21:24:19.702650 4811 scope.go:117] "RemoveContainer" containerID="89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" Feb 16 21:24:19 crc kubenswrapper[4811]: E0216 21:24:19.703890 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:24:24 crc kubenswrapper[4811]: E0216 21:24:24.839459 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 21:24:24 crc kubenswrapper[4811]: E0216 21:24:24.840116 4811 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 21:24:24 crc kubenswrapper[4811]: E0216 21:24:24.840348 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/
var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s56zx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-x49kk_openstack(46d0afcb-2a14-4e67-89fc-ed848d1637ce): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:24:24 crc kubenswrapper[4811]: E0216 21:24:24.841867 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:24:34 crc kubenswrapper[4811]: I0216 21:24:34.703349 4811 scope.go:117] "RemoveContainer" containerID="89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" Feb 16 21:24:34 crc kubenswrapper[4811]: E0216 21:24:34.704481 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:24:39 crc kubenswrapper[4811]: E0216 21:24:39.706534 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:24:47 crc kubenswrapper[4811]: I0216 21:24:47.058832 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-6hccd"] Feb 16 21:24:47 crc kubenswrapper[4811]: I0216 21:24:47.075035 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-gn59h"] Feb 16 21:24:47 crc kubenswrapper[4811]: I0216 21:24:47.089809 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-6hccd"] Feb 16 21:24:47 crc kubenswrapper[4811]: I0216 21:24:47.102459 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-gn59h"] Feb 16 21:24:47 crc kubenswrapper[4811]: I0216 21:24:47.115945 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell0-db-create-278vh"] Feb 16 21:24:47 crc kubenswrapper[4811]: I0216 21:24:47.127353 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-278vh"] Feb 16 21:24:47 crc kubenswrapper[4811]: I0216 21:24:47.703786 4811 scope.go:117] "RemoveContainer" containerID="89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" Feb 16 21:24:47 crc kubenswrapper[4811]: E0216 21:24:47.704152 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:24:48 crc kubenswrapper[4811]: I0216 21:24:48.719546 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="155cab82-ef10-4ce4-8116-f3f80558987d" path="/var/lib/kubelet/pods/155cab82-ef10-4ce4-8116-f3f80558987d/volumes" Feb 16 21:24:48 crc kubenswrapper[4811]: I0216 21:24:48.720311 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35741874-d08a-4633-8b5f-438b7a3f6d12" path="/var/lib/kubelet/pods/35741874-d08a-4633-8b5f-438b7a3f6d12/volumes" Feb 16 21:24:48 crc kubenswrapper[4811]: I0216 21:24:48.721014 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54b96249-bc72-45e5-9d7e-481deb69113b" path="/var/lib/kubelet/pods/54b96249-bc72-45e5-9d7e-481deb69113b/volumes" Feb 16 21:24:49 crc kubenswrapper[4811]: I0216 21:24:49.031690 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-da63-account-create-update-cxjcz"] Feb 16 21:24:49 crc kubenswrapper[4811]: I0216 21:24:49.048355 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1049-account-create-update-w7jrz"] Feb 16 
21:24:49 crc kubenswrapper[4811]: I0216 21:24:49.056774 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-da63-account-create-update-cxjcz"] Feb 16 21:24:49 crc kubenswrapper[4811]: I0216 21:24:49.064661 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-4bec-account-create-update-zg28l"] Feb 16 21:24:49 crc kubenswrapper[4811]: I0216 21:24:49.071553 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1049-account-create-update-w7jrz"] Feb 16 21:24:49 crc kubenswrapper[4811]: I0216 21:24:49.078020 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-4bec-account-create-update-zg28l"] Feb 16 21:24:50 crc kubenswrapper[4811]: I0216 21:24:50.719835 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b583703-c772-4c1c-895d-6c410f34c439" path="/var/lib/kubelet/pods/0b583703-c772-4c1c-895d-6c410f34c439/volumes" Feb 16 21:24:50 crc kubenswrapper[4811]: I0216 21:24:50.721550 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41ab0943-b07e-4c02-89df-e4768d30f129" path="/var/lib/kubelet/pods/41ab0943-b07e-4c02-89df-e4768d30f129/volumes" Feb 16 21:24:50 crc kubenswrapper[4811]: I0216 21:24:50.722719 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81372a4d-8d39-4692-8c7f-ed243fcf3822" path="/var/lib/kubelet/pods/81372a4d-8d39-4692-8c7f-ed243fcf3822/volumes" Feb 16 21:24:52 crc kubenswrapper[4811]: I0216 21:24:52.561292 4811 scope.go:117] "RemoveContainer" containerID="bccd5a55b686da93c11d141cf741a0cc651c398511c01c45b590ed41ad9def97" Feb 16 21:24:52 crc kubenswrapper[4811]: I0216 21:24:52.604225 4811 scope.go:117] "RemoveContainer" containerID="01a36ee76f1df2142cbc58fc40848184fafa36cdf8a55e27653abe852043214b" Feb 16 21:24:52 crc kubenswrapper[4811]: I0216 21:24:52.696973 4811 scope.go:117] "RemoveContainer" containerID="8fccd4c81ddc7e2103b2629665ee09572db2bf909e3ea0b3308d182738847222" Feb 
16 21:24:52 crc kubenswrapper[4811]: I0216 21:24:52.771653 4811 scope.go:117] "RemoveContainer" containerID="3da90750b694462c35ced5df17c872be656b88d7528113bdda19eb659240aff1" Feb 16 21:24:52 crc kubenswrapper[4811]: I0216 21:24:52.796489 4811 scope.go:117] "RemoveContainer" containerID="dced4156982ac92d8984986e268eb973bbacaa7f38e3e7131cd4194d05f8fa99" Feb 16 21:24:52 crc kubenswrapper[4811]: I0216 21:24:52.843461 4811 scope.go:117] "RemoveContainer" containerID="7565c81ff33c4719e2fc870db567b4d6718efa65497d5dd589245dd50f84bf92" Feb 16 21:24:52 crc kubenswrapper[4811]: I0216 21:24:52.901356 4811 scope.go:117] "RemoveContainer" containerID="ef53e4e24d7f3df32bca85abb72d8ba1aaa1129c3399026269d17c671af18e3f" Feb 16 21:24:52 crc kubenswrapper[4811]: I0216 21:24:52.927706 4811 scope.go:117] "RemoveContainer" containerID="6cfb73a10f65b939a340f75eb824896b2811af43eb83dea282cf7d468d013a71" Feb 16 21:24:52 crc kubenswrapper[4811]: I0216 21:24:52.949346 4811 scope.go:117] "RemoveContainer" containerID="2a3b650f2567dc451027a9ee4e884c29086a6c198341b0ff1aad8cab2b7a538c" Feb 16 21:24:52 crc kubenswrapper[4811]: I0216 21:24:52.981637 4811 scope.go:117] "RemoveContainer" containerID="ca65d8083953c4efb197939751bda8fc23493af4ad3eb1a4a58ee78b70c9c7f2" Feb 16 21:24:53 crc kubenswrapper[4811]: I0216 21:24:53.003981 4811 scope.go:117] "RemoveContainer" containerID="d2f95e4d2897473b77afcffbc43b8d5891a29386bd997ab8c4ab099c55f8191b" Feb 16 21:24:54 crc kubenswrapper[4811]: E0216 21:24:54.706439 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:25:02 crc kubenswrapper[4811]: I0216 21:25:02.711849 4811 scope.go:117] "RemoveContainer" 
containerID="89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" Feb 16 21:25:02 crc kubenswrapper[4811]: E0216 21:25:02.713281 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:25:06 crc kubenswrapper[4811]: E0216 21:25:06.706508 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:25:16 crc kubenswrapper[4811]: I0216 21:25:16.067839 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-l2gwq"] Feb 16 21:25:16 crc kubenswrapper[4811]: I0216 21:25:16.077725 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-l2gwq"] Feb 16 21:25:16 crc kubenswrapper[4811]: I0216 21:25:16.714276 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a532407-8a9b-4764-ac7b-d4af3c9e53e5" path="/var/lib/kubelet/pods/9a532407-8a9b-4764-ac7b-d4af3c9e53e5/volumes" Feb 16 21:25:17 crc kubenswrapper[4811]: I0216 21:25:17.706031 4811 scope.go:117] "RemoveContainer" containerID="89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" Feb 16 21:25:17 crc kubenswrapper[4811]: E0216 21:25:17.706483 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:25:17 crc kubenswrapper[4811]: E0216 21:25:17.708744 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:25:30 crc kubenswrapper[4811]: I0216 21:25:30.704291 4811 scope.go:117] "RemoveContainer" containerID="89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" Feb 16 21:25:30 crc kubenswrapper[4811]: E0216 21:25:30.705409 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:25:31 crc kubenswrapper[4811]: E0216 21:25:31.704609 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:25:34 crc kubenswrapper[4811]: I0216 21:25:34.040799 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hcp7j"] Feb 16 21:25:34 crc kubenswrapper[4811]: I0216 21:25:34.051217 4811 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hcp7j"] Feb 16 21:25:34 crc kubenswrapper[4811]: I0216 21:25:34.716014 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2be2a7d3-2469-41ce-a8d4-baf6e58aece5" path="/var/lib/kubelet/pods/2be2a7d3-2469-41ce-a8d4-baf6e58aece5/volumes" Feb 16 21:25:35 crc kubenswrapper[4811]: I0216 21:25:35.045279 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-d8l76"] Feb 16 21:25:35 crc kubenswrapper[4811]: I0216 21:25:35.059328 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-d8l76"] Feb 16 21:25:36 crc kubenswrapper[4811]: I0216 21:25:36.713730 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8793e091-1ee8-417a-86fa-0c22af64bde3" path="/var/lib/kubelet/pods/8793e091-1ee8-417a-86fa-0c22af64bde3/volumes" Feb 16 21:25:44 crc kubenswrapper[4811]: E0216 21:25:44.706688 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:25:45 crc kubenswrapper[4811]: I0216 21:25:45.703521 4811 scope.go:117] "RemoveContainer" containerID="89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" Feb 16 21:25:45 crc kubenswrapper[4811]: E0216 21:25:45.704044 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:25:53 
crc kubenswrapper[4811]: I0216 21:25:53.266121 4811 scope.go:117] "RemoveContainer" containerID="ba149d7a1af50bcfd429e917e6c06672d3954677d270782eb4fecae0377ca675" Feb 16 21:25:53 crc kubenswrapper[4811]: I0216 21:25:53.320210 4811 scope.go:117] "RemoveContainer" containerID="9b0c748e1acb21938335555bd06b6e93d705d12d26601030596ef94865519b33" Feb 16 21:25:53 crc kubenswrapper[4811]: I0216 21:25:53.370640 4811 scope.go:117] "RemoveContainer" containerID="6a42419938614270f85e664c5279f3464f9ac631067acf322dbecc09fd515997" Feb 16 21:25:56 crc kubenswrapper[4811]: E0216 21:25:56.706085 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:25:59 crc kubenswrapper[4811]: I0216 21:25:59.703820 4811 scope.go:117] "RemoveContainer" containerID="89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" Feb 16 21:25:59 crc kubenswrapper[4811]: E0216 21:25:59.705693 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:26:07 crc kubenswrapper[4811]: E0216 21:26:07.706397 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" 
podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:26:13 crc kubenswrapper[4811]: I0216 21:26:13.702898 4811 scope.go:117] "RemoveContainer" containerID="89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" Feb 16 21:26:13 crc kubenswrapper[4811]: E0216 21:26:13.703779 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:26:19 crc kubenswrapper[4811]: I0216 21:26:19.073364 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-5kbjt"] Feb 16 21:26:19 crc kubenswrapper[4811]: I0216 21:26:19.084158 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-5kbjt"] Feb 16 21:26:20 crc kubenswrapper[4811]: I0216 21:26:20.714144 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea3abe05-aab5-400d-b325-d94a0916d6a9" path="/var/lib/kubelet/pods/ea3abe05-aab5-400d-b325-d94a0916d6a9/volumes" Feb 16 21:26:21 crc kubenswrapper[4811]: E0216 21:26:21.705954 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:26:28 crc kubenswrapper[4811]: I0216 21:26:28.703498 4811 scope.go:117] "RemoveContainer" containerID="89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" Feb 16 21:26:28 crc kubenswrapper[4811]: E0216 21:26:28.704582 4811 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:26:35 crc kubenswrapper[4811]: E0216 21:26:35.705211 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:26:43 crc kubenswrapper[4811]: I0216 21:26:43.703303 4811 scope.go:117] "RemoveContainer" containerID="89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" Feb 16 21:26:43 crc kubenswrapper[4811]: E0216 21:26:43.704644 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:26:47 crc kubenswrapper[4811]: E0216 21:26:47.713540 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:26:53 crc kubenswrapper[4811]: I0216 21:26:53.488155 4811 scope.go:117] "RemoveContainer" 
containerID="e955d5a60f1aa612e7d82ab4aa271199708308ddd656ae1ca80a406e41061c7a" Feb 16 21:26:54 crc kubenswrapper[4811]: I0216 21:26:54.703638 4811 scope.go:117] "RemoveContainer" containerID="89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" Feb 16 21:26:55 crc kubenswrapper[4811]: I0216 21:26:55.016469 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerStarted","Data":"76c556c711513a9cb0790f80311289c079022edb69fd8537889f8c774cf64f83"} Feb 16 21:27:00 crc kubenswrapper[4811]: E0216 21:27:00.706219 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:27:12 crc kubenswrapper[4811]: E0216 21:27:12.713627 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:27:24 crc kubenswrapper[4811]: E0216 21:27:24.705870 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:27:36 crc kubenswrapper[4811]: E0216 21:27:36.705263 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:27:47 crc kubenswrapper[4811]: E0216 21:27:47.706438 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:27:58 crc kubenswrapper[4811]: E0216 21:27:58.706169 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:28:12 crc kubenswrapper[4811]: E0216 21:28:12.711831 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:28:23 crc kubenswrapper[4811]: E0216 21:28:23.707596 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:28:38 crc kubenswrapper[4811]: E0216 21:28:38.707189 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:28:49 crc kubenswrapper[4811]: E0216 21:28:49.707120 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:29:00 crc kubenswrapper[4811]: E0216 21:29:00.706864 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:29:11 crc kubenswrapper[4811]: E0216 21:29:11.706518 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:29:18 crc kubenswrapper[4811]: I0216 21:29:18.364131 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:29:18 crc kubenswrapper[4811]: I0216 21:29:18.364939 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" 
podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:29:25 crc kubenswrapper[4811]: I0216 21:29:25.705685 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:29:25 crc kubenswrapper[4811]: E0216 21:29:25.827328 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 21:29:25 crc kubenswrapper[4811]: E0216 21:29:25.827388 4811 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 21:29:25 crc kubenswrapper[4811]: E0216 21:29:25.827533 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s56zx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-x49kk_openstack(46d0afcb-2a14-4e67-89fc-ed848d1637ce): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:29:25 crc kubenswrapper[4811]: E0216 21:29:25.829059 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:29:39 crc kubenswrapper[4811]: E0216 21:29:39.706036 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:29:48 crc kubenswrapper[4811]: I0216 21:29:48.363711 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:29:48 crc kubenswrapper[4811]: I0216 21:29:48.364424 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:29:50 crc kubenswrapper[4811]: E0216 21:29:50.704909 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:30:00 crc kubenswrapper[4811]: I0216 21:30:00.150215 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521290-82zf2"] Feb 16 21:30:00 crc kubenswrapper[4811]: E0216 21:30:00.151296 4811 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="153361c7-730c-4a48-b920-e0596d43fe17" containerName="extract-utilities" Feb 16 21:30:00 crc kubenswrapper[4811]: I0216 21:30:00.151317 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="153361c7-730c-4a48-b920-e0596d43fe17" containerName="extract-utilities" Feb 16 21:30:00 crc kubenswrapper[4811]: E0216 21:30:00.151349 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153361c7-730c-4a48-b920-e0596d43fe17" containerName="registry-server" Feb 16 21:30:00 crc kubenswrapper[4811]: I0216 21:30:00.151357 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="153361c7-730c-4a48-b920-e0596d43fe17" containerName="registry-server" Feb 16 21:30:00 crc kubenswrapper[4811]: E0216 21:30:00.151373 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153361c7-730c-4a48-b920-e0596d43fe17" containerName="extract-content" Feb 16 21:30:00 crc kubenswrapper[4811]: I0216 21:30:00.151381 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="153361c7-730c-4a48-b920-e0596d43fe17" containerName="extract-content" Feb 16 21:30:00 crc kubenswrapper[4811]: I0216 21:30:00.151625 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="153361c7-730c-4a48-b920-e0596d43fe17" containerName="registry-server" Feb 16 21:30:00 crc kubenswrapper[4811]: I0216 21:30:00.152601 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-82zf2" Feb 16 21:30:00 crc kubenswrapper[4811]: I0216 21:30:00.155807 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 21:30:00 crc kubenswrapper[4811]: I0216 21:30:00.156046 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 21:30:00 crc kubenswrapper[4811]: I0216 21:30:00.187656 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521290-82zf2"] Feb 16 21:30:00 crc kubenswrapper[4811]: I0216 21:30:00.251452 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7whdf\" (UniqueName: \"kubernetes.io/projected/252e711c-7bbc-4ef1-8be2-13e34b479fa7-kube-api-access-7whdf\") pod \"collect-profiles-29521290-82zf2\" (UID: \"252e711c-7bbc-4ef1-8be2-13e34b479fa7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-82zf2" Feb 16 21:30:00 crc kubenswrapper[4811]: I0216 21:30:00.251833 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/252e711c-7bbc-4ef1-8be2-13e34b479fa7-secret-volume\") pod \"collect-profiles-29521290-82zf2\" (UID: \"252e711c-7bbc-4ef1-8be2-13e34b479fa7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-82zf2" Feb 16 21:30:00 crc kubenswrapper[4811]: I0216 21:30:00.252143 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/252e711c-7bbc-4ef1-8be2-13e34b479fa7-config-volume\") pod \"collect-profiles-29521290-82zf2\" (UID: \"252e711c-7bbc-4ef1-8be2-13e34b479fa7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-82zf2" Feb 16 21:30:00 crc kubenswrapper[4811]: I0216 21:30:00.353643 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/252e711c-7bbc-4ef1-8be2-13e34b479fa7-secret-volume\") pod \"collect-profiles-29521290-82zf2\" (UID: \"252e711c-7bbc-4ef1-8be2-13e34b479fa7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-82zf2" Feb 16 21:30:00 crc kubenswrapper[4811]: I0216 21:30:00.353857 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/252e711c-7bbc-4ef1-8be2-13e34b479fa7-config-volume\") pod \"collect-profiles-29521290-82zf2\" (UID: \"252e711c-7bbc-4ef1-8be2-13e34b479fa7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-82zf2" Feb 16 21:30:00 crc kubenswrapper[4811]: I0216 21:30:00.353907 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7whdf\" (UniqueName: \"kubernetes.io/projected/252e711c-7bbc-4ef1-8be2-13e34b479fa7-kube-api-access-7whdf\") pod \"collect-profiles-29521290-82zf2\" (UID: \"252e711c-7bbc-4ef1-8be2-13e34b479fa7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-82zf2" Feb 16 21:30:00 crc kubenswrapper[4811]: I0216 21:30:00.355566 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/252e711c-7bbc-4ef1-8be2-13e34b479fa7-config-volume\") pod \"collect-profiles-29521290-82zf2\" (UID: \"252e711c-7bbc-4ef1-8be2-13e34b479fa7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-82zf2" Feb 16 21:30:00 crc kubenswrapper[4811]: I0216 21:30:00.360345 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/252e711c-7bbc-4ef1-8be2-13e34b479fa7-secret-volume\") pod \"collect-profiles-29521290-82zf2\" (UID: \"252e711c-7bbc-4ef1-8be2-13e34b479fa7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-82zf2" Feb 16 21:30:00 crc kubenswrapper[4811]: I0216 21:30:00.373705 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7whdf\" (UniqueName: \"kubernetes.io/projected/252e711c-7bbc-4ef1-8be2-13e34b479fa7-kube-api-access-7whdf\") pod \"collect-profiles-29521290-82zf2\" (UID: \"252e711c-7bbc-4ef1-8be2-13e34b479fa7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-82zf2" Feb 16 21:30:00 crc kubenswrapper[4811]: I0216 21:30:00.500173 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-82zf2" Feb 16 21:30:01 crc kubenswrapper[4811]: I0216 21:30:01.009920 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521290-82zf2"] Feb 16 21:30:01 crc kubenswrapper[4811]: I0216 21:30:01.102009 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-82zf2" event={"ID":"252e711c-7bbc-4ef1-8be2-13e34b479fa7","Type":"ContainerStarted","Data":"2857bb5e413e59d8f79a3386a9169293d6e799093681e2528c342441021fc693"} Feb 16 21:30:02 crc kubenswrapper[4811]: I0216 21:30:02.113876 4811 generic.go:334] "Generic (PLEG): container finished" podID="252e711c-7bbc-4ef1-8be2-13e34b479fa7" containerID="427936c9e9a4b6ab62e92b5057bfc4f6d7acf6f85a985a57fc4348319b9573c3" exitCode=0 Feb 16 21:30:02 crc kubenswrapper[4811]: I0216 21:30:02.113961 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-82zf2" 
event={"ID":"252e711c-7bbc-4ef1-8be2-13e34b479fa7","Type":"ContainerDied","Data":"427936c9e9a4b6ab62e92b5057bfc4f6d7acf6f85a985a57fc4348319b9573c3"} Feb 16 21:30:03 crc kubenswrapper[4811]: I0216 21:30:03.587577 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-82zf2" Feb 16 21:30:03 crc kubenswrapper[4811]: I0216 21:30:03.624856 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7whdf\" (UniqueName: \"kubernetes.io/projected/252e711c-7bbc-4ef1-8be2-13e34b479fa7-kube-api-access-7whdf\") pod \"252e711c-7bbc-4ef1-8be2-13e34b479fa7\" (UID: \"252e711c-7bbc-4ef1-8be2-13e34b479fa7\") " Feb 16 21:30:03 crc kubenswrapper[4811]: I0216 21:30:03.625024 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/252e711c-7bbc-4ef1-8be2-13e34b479fa7-secret-volume\") pod \"252e711c-7bbc-4ef1-8be2-13e34b479fa7\" (UID: \"252e711c-7bbc-4ef1-8be2-13e34b479fa7\") " Feb 16 21:30:03 crc kubenswrapper[4811]: I0216 21:30:03.625245 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/252e711c-7bbc-4ef1-8be2-13e34b479fa7-config-volume\") pod \"252e711c-7bbc-4ef1-8be2-13e34b479fa7\" (UID: \"252e711c-7bbc-4ef1-8be2-13e34b479fa7\") " Feb 16 21:30:03 crc kubenswrapper[4811]: I0216 21:30:03.626290 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/252e711c-7bbc-4ef1-8be2-13e34b479fa7-config-volume" (OuterVolumeSpecName: "config-volume") pod "252e711c-7bbc-4ef1-8be2-13e34b479fa7" (UID: "252e711c-7bbc-4ef1-8be2-13e34b479fa7"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:30:03 crc kubenswrapper[4811]: I0216 21:30:03.634412 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/252e711c-7bbc-4ef1-8be2-13e34b479fa7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "252e711c-7bbc-4ef1-8be2-13e34b479fa7" (UID: "252e711c-7bbc-4ef1-8be2-13e34b479fa7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:30:03 crc kubenswrapper[4811]: I0216 21:30:03.635251 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/252e711c-7bbc-4ef1-8be2-13e34b479fa7-kube-api-access-7whdf" (OuterVolumeSpecName: "kube-api-access-7whdf") pod "252e711c-7bbc-4ef1-8be2-13e34b479fa7" (UID: "252e711c-7bbc-4ef1-8be2-13e34b479fa7"). InnerVolumeSpecName "kube-api-access-7whdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:30:03 crc kubenswrapper[4811]: E0216 21:30:03.708716 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:30:03 crc kubenswrapper[4811]: I0216 21:30:03.728954 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7whdf\" (UniqueName: \"kubernetes.io/projected/252e711c-7bbc-4ef1-8be2-13e34b479fa7-kube-api-access-7whdf\") on node \"crc\" DevicePath \"\"" Feb 16 21:30:03 crc kubenswrapper[4811]: I0216 21:30:03.729019 4811 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/252e711c-7bbc-4ef1-8be2-13e34b479fa7-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 21:30:03 crc kubenswrapper[4811]: I0216 21:30:03.729037 4811 reconciler_common.go:293] 
"Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/252e711c-7bbc-4ef1-8be2-13e34b479fa7-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 21:30:04 crc kubenswrapper[4811]: I0216 21:30:04.169018 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-82zf2" event={"ID":"252e711c-7bbc-4ef1-8be2-13e34b479fa7","Type":"ContainerDied","Data":"2857bb5e413e59d8f79a3386a9169293d6e799093681e2528c342441021fc693"} Feb 16 21:30:04 crc kubenswrapper[4811]: I0216 21:30:04.169091 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2857bb5e413e59d8f79a3386a9169293d6e799093681e2528c342441021fc693" Feb 16 21:30:04 crc kubenswrapper[4811]: I0216 21:30:04.169115 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-82zf2" Feb 16 21:30:04 crc kubenswrapper[4811]: I0216 21:30:04.696641 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521245-9xzcs"] Feb 16 21:30:04 crc kubenswrapper[4811]: I0216 21:30:04.717319 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521245-9xzcs"] Feb 16 21:30:06 crc kubenswrapper[4811]: I0216 21:30:06.724279 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9195c217-c5bc-4625-9b9c-2aa209485e3c" path="/var/lib/kubelet/pods/9195c217-c5bc-4625-9b9c-2aa209485e3c/volumes" Feb 16 21:30:16 crc kubenswrapper[4811]: E0216 21:30:16.705555 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 
16 21:30:18 crc kubenswrapper[4811]: I0216 21:30:18.364324 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:30:18 crc kubenswrapper[4811]: I0216 21:30:18.365315 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:30:18 crc kubenswrapper[4811]: I0216 21:30:18.365367 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" Feb 16 21:30:18 crc kubenswrapper[4811]: I0216 21:30:18.366114 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"76c556c711513a9cb0790f80311289c079022edb69fd8537889f8c774cf64f83"} pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:30:18 crc kubenswrapper[4811]: I0216 21:30:18.366168 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" containerID="cri-o://76c556c711513a9cb0790f80311289c079022edb69fd8537889f8c774cf64f83" gracePeriod=600 Feb 16 21:30:18 crc kubenswrapper[4811]: I0216 21:30:18.697232 4811 generic.go:334] "Generic (PLEG): container finished" podID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" 
containerID="76c556c711513a9cb0790f80311289c079022edb69fd8537889f8c774cf64f83" exitCode=0 Feb 16 21:30:18 crc kubenswrapper[4811]: I0216 21:30:18.697236 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerDied","Data":"76c556c711513a9cb0790f80311289c079022edb69fd8537889f8c774cf64f83"} Feb 16 21:30:18 crc kubenswrapper[4811]: I0216 21:30:18.697516 4811 scope.go:117] "RemoveContainer" containerID="89ca6938d321c0a0bf12f1f6d28aeccc9978b337cd744466259a9e0d8b03a7cb" Feb 16 21:30:19 crc kubenswrapper[4811]: I0216 21:30:19.709791 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerStarted","Data":"34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d"} Feb 16 21:30:29 crc kubenswrapper[4811]: E0216 21:30:29.706440 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:30:40 crc kubenswrapper[4811]: E0216 21:30:40.705376 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:30:51 crc kubenswrapper[4811]: E0216 21:30:51.707138 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:30:53 crc kubenswrapper[4811]: I0216 21:30:53.642692 4811 scope.go:117] "RemoveContainer" containerID="32273f40296cb069a1d2afbb7235ee062b126f01eea8680d182bc4166a644b08" Feb 16 21:31:03 crc kubenswrapper[4811]: E0216 21:31:03.706999 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:31:14 crc kubenswrapper[4811]: E0216 21:31:14.708487 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:31:26 crc kubenswrapper[4811]: E0216 21:31:26.705141 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:31:37 crc kubenswrapper[4811]: E0216 21:31:37.704811 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:31:51 crc 
kubenswrapper[4811]: E0216 21:31:51.704712 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:31:57 crc kubenswrapper[4811]: I0216 21:31:57.272286 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kzng7"] Feb 16 21:31:57 crc kubenswrapper[4811]: E0216 21:31:57.284011 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="252e711c-7bbc-4ef1-8be2-13e34b479fa7" containerName="collect-profiles" Feb 16 21:31:57 crc kubenswrapper[4811]: I0216 21:31:57.284067 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="252e711c-7bbc-4ef1-8be2-13e34b479fa7" containerName="collect-profiles" Feb 16 21:31:57 crc kubenswrapper[4811]: I0216 21:31:57.284860 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="252e711c-7bbc-4ef1-8be2-13e34b479fa7" containerName="collect-profiles" Feb 16 21:31:57 crc kubenswrapper[4811]: I0216 21:31:57.289185 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzng7" Feb 16 21:31:57 crc kubenswrapper[4811]: I0216 21:31:57.302993 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzng7"] Feb 16 21:31:57 crc kubenswrapper[4811]: I0216 21:31:57.414239 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf57c330-711d-4612-9a73-edda47f69403-catalog-content\") pod \"redhat-marketplace-kzng7\" (UID: \"cf57c330-711d-4612-9a73-edda47f69403\") " pod="openshift-marketplace/redhat-marketplace-kzng7" Feb 16 21:31:57 crc kubenswrapper[4811]: I0216 21:31:57.414310 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj9sv\" (UniqueName: \"kubernetes.io/projected/cf57c330-711d-4612-9a73-edda47f69403-kube-api-access-mj9sv\") pod \"redhat-marketplace-kzng7\" (UID: \"cf57c330-711d-4612-9a73-edda47f69403\") " pod="openshift-marketplace/redhat-marketplace-kzng7" Feb 16 21:31:57 crc kubenswrapper[4811]: I0216 21:31:57.414646 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf57c330-711d-4612-9a73-edda47f69403-utilities\") pod \"redhat-marketplace-kzng7\" (UID: \"cf57c330-711d-4612-9a73-edda47f69403\") " pod="openshift-marketplace/redhat-marketplace-kzng7" Feb 16 21:31:57 crc kubenswrapper[4811]: I0216 21:31:57.516909 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf57c330-711d-4612-9a73-edda47f69403-utilities\") pod \"redhat-marketplace-kzng7\" (UID: \"cf57c330-711d-4612-9a73-edda47f69403\") " pod="openshift-marketplace/redhat-marketplace-kzng7" Feb 16 21:31:57 crc kubenswrapper[4811]: I0216 21:31:57.517085 4811 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf57c330-711d-4612-9a73-edda47f69403-catalog-content\") pod \"redhat-marketplace-kzng7\" (UID: \"cf57c330-711d-4612-9a73-edda47f69403\") " pod="openshift-marketplace/redhat-marketplace-kzng7" Feb 16 21:31:57 crc kubenswrapper[4811]: I0216 21:31:57.517133 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj9sv\" (UniqueName: \"kubernetes.io/projected/cf57c330-711d-4612-9a73-edda47f69403-kube-api-access-mj9sv\") pod \"redhat-marketplace-kzng7\" (UID: \"cf57c330-711d-4612-9a73-edda47f69403\") " pod="openshift-marketplace/redhat-marketplace-kzng7" Feb 16 21:31:57 crc kubenswrapper[4811]: I0216 21:31:57.517440 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf57c330-711d-4612-9a73-edda47f69403-utilities\") pod \"redhat-marketplace-kzng7\" (UID: \"cf57c330-711d-4612-9a73-edda47f69403\") " pod="openshift-marketplace/redhat-marketplace-kzng7" Feb 16 21:31:57 crc kubenswrapper[4811]: I0216 21:31:57.517686 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf57c330-711d-4612-9a73-edda47f69403-catalog-content\") pod \"redhat-marketplace-kzng7\" (UID: \"cf57c330-711d-4612-9a73-edda47f69403\") " pod="openshift-marketplace/redhat-marketplace-kzng7" Feb 16 21:31:57 crc kubenswrapper[4811]: I0216 21:31:57.579808 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj9sv\" (UniqueName: \"kubernetes.io/projected/cf57c330-711d-4612-9a73-edda47f69403-kube-api-access-mj9sv\") pod \"redhat-marketplace-kzng7\" (UID: \"cf57c330-711d-4612-9a73-edda47f69403\") " pod="openshift-marketplace/redhat-marketplace-kzng7" Feb 16 21:31:57 crc kubenswrapper[4811]: I0216 21:31:57.627098 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzng7" Feb 16 21:31:58 crc kubenswrapper[4811]: W0216 21:31:58.099732 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf57c330_711d_4612_9a73_edda47f69403.slice/crio-fcf9941d83abf8dd245a2b492efe379b9050f2fcb97e21e01713da30ef959938 WatchSource:0}: Error finding container fcf9941d83abf8dd245a2b492efe379b9050f2fcb97e21e01713da30ef959938: Status 404 returned error can't find the container with id fcf9941d83abf8dd245a2b492efe379b9050f2fcb97e21e01713da30ef959938 Feb 16 21:31:58 crc kubenswrapper[4811]: I0216 21:31:58.101643 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzng7"] Feb 16 21:31:58 crc kubenswrapper[4811]: I0216 21:31:58.833291 4811 generic.go:334] "Generic (PLEG): container finished" podID="cf57c330-711d-4612-9a73-edda47f69403" containerID="f71c34fde004e49621e9a6c60ca00b9d68158c71f2020c8849fdecb4bcb24bd7" exitCode=0 Feb 16 21:31:58 crc kubenswrapper[4811]: I0216 21:31:58.833350 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzng7" event={"ID":"cf57c330-711d-4612-9a73-edda47f69403","Type":"ContainerDied","Data":"f71c34fde004e49621e9a6c60ca00b9d68158c71f2020c8849fdecb4bcb24bd7"} Feb 16 21:31:58 crc kubenswrapper[4811]: I0216 21:31:58.833381 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzng7" event={"ID":"cf57c330-711d-4612-9a73-edda47f69403","Type":"ContainerStarted","Data":"fcf9941d83abf8dd245a2b492efe379b9050f2fcb97e21e01713da30ef959938"} Feb 16 21:31:59 crc kubenswrapper[4811]: I0216 21:31:59.855351 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzng7" 
event={"ID":"cf57c330-711d-4612-9a73-edda47f69403","Type":"ContainerStarted","Data":"84e31ee5793553e126f2c043be812a719cda8621e38d129cd3ee31040b6a3628"} Feb 16 21:32:00 crc kubenswrapper[4811]: I0216 21:32:00.866741 4811 generic.go:334] "Generic (PLEG): container finished" podID="cf57c330-711d-4612-9a73-edda47f69403" containerID="84e31ee5793553e126f2c043be812a719cda8621e38d129cd3ee31040b6a3628" exitCode=0 Feb 16 21:32:00 crc kubenswrapper[4811]: I0216 21:32:00.866782 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzng7" event={"ID":"cf57c330-711d-4612-9a73-edda47f69403","Type":"ContainerDied","Data":"84e31ee5793553e126f2c043be812a719cda8621e38d129cd3ee31040b6a3628"} Feb 16 21:32:01 crc kubenswrapper[4811]: I0216 21:32:01.877788 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzng7" event={"ID":"cf57c330-711d-4612-9a73-edda47f69403","Type":"ContainerStarted","Data":"4775f854ef47a78a1a170a8c81f12eaaeb1e3eb70dd5db515cae5c0c94ad6aaa"} Feb 16 21:32:01 crc kubenswrapper[4811]: I0216 21:32:01.913009 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kzng7" podStartSLOduration=2.498217684 podStartE2EDuration="4.912979851s" podCreationTimestamp="2026-02-16 21:31:57 +0000 UTC" firstStartedPulling="2026-02-16 21:31:58.835875906 +0000 UTC m=+2136.765171844" lastFinishedPulling="2026-02-16 21:32:01.250638053 +0000 UTC m=+2139.179934011" observedRunningTime="2026-02-16 21:32:01.902170627 +0000 UTC m=+2139.831466595" watchObservedRunningTime="2026-02-16 21:32:01.912979851 +0000 UTC m=+2139.842275829" Feb 16 21:32:04 crc kubenswrapper[4811]: E0216 21:32:04.704895 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:32:07 crc kubenswrapper[4811]: I0216 21:32:07.627172 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kzng7" Feb 16 21:32:07 crc kubenswrapper[4811]: I0216 21:32:07.627716 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kzng7" Feb 16 21:32:07 crc kubenswrapper[4811]: I0216 21:32:07.700157 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kzng7" Feb 16 21:32:07 crc kubenswrapper[4811]: I0216 21:32:07.990365 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kzng7" Feb 16 21:32:08 crc kubenswrapper[4811]: I0216 21:32:08.045235 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzng7"] Feb 16 21:32:09 crc kubenswrapper[4811]: I0216 21:32:09.952667 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kzng7" podUID="cf57c330-711d-4612-9a73-edda47f69403" containerName="registry-server" containerID="cri-o://4775f854ef47a78a1a170a8c81f12eaaeb1e3eb70dd5db515cae5c0c94ad6aaa" gracePeriod=2 Feb 16 21:32:10 crc kubenswrapper[4811]: I0216 21:32:10.530740 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzng7" Feb 16 21:32:10 crc kubenswrapper[4811]: I0216 21:32:10.609372 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf57c330-711d-4612-9a73-edda47f69403-utilities\") pod \"cf57c330-711d-4612-9a73-edda47f69403\" (UID: \"cf57c330-711d-4612-9a73-edda47f69403\") " Feb 16 21:32:10 crc kubenswrapper[4811]: I0216 21:32:10.609461 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf57c330-711d-4612-9a73-edda47f69403-catalog-content\") pod \"cf57c330-711d-4612-9a73-edda47f69403\" (UID: \"cf57c330-711d-4612-9a73-edda47f69403\") " Feb 16 21:32:10 crc kubenswrapper[4811]: I0216 21:32:10.609548 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj9sv\" (UniqueName: \"kubernetes.io/projected/cf57c330-711d-4612-9a73-edda47f69403-kube-api-access-mj9sv\") pod \"cf57c330-711d-4612-9a73-edda47f69403\" (UID: \"cf57c330-711d-4612-9a73-edda47f69403\") " Feb 16 21:32:10 crc kubenswrapper[4811]: I0216 21:32:10.610538 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf57c330-711d-4612-9a73-edda47f69403-utilities" (OuterVolumeSpecName: "utilities") pod "cf57c330-711d-4612-9a73-edda47f69403" (UID: "cf57c330-711d-4612-9a73-edda47f69403"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:32:10 crc kubenswrapper[4811]: I0216 21:32:10.634266 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf57c330-711d-4612-9a73-edda47f69403-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf57c330-711d-4612-9a73-edda47f69403" (UID: "cf57c330-711d-4612-9a73-edda47f69403"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:32:10 crc kubenswrapper[4811]: I0216 21:32:10.712157 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf57c330-711d-4612-9a73-edda47f69403-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:32:10 crc kubenswrapper[4811]: I0216 21:32:10.712450 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf57c330-711d-4612-9a73-edda47f69403-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:32:10 crc kubenswrapper[4811]: I0216 21:32:10.965221 4811 generic.go:334] "Generic (PLEG): container finished" podID="cf57c330-711d-4612-9a73-edda47f69403" containerID="4775f854ef47a78a1a170a8c81f12eaaeb1e3eb70dd5db515cae5c0c94ad6aaa" exitCode=0 Feb 16 21:32:10 crc kubenswrapper[4811]: I0216 21:32:10.965274 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzng7" event={"ID":"cf57c330-711d-4612-9a73-edda47f69403","Type":"ContainerDied","Data":"4775f854ef47a78a1a170a8c81f12eaaeb1e3eb70dd5db515cae5c0c94ad6aaa"} Feb 16 21:32:10 crc kubenswrapper[4811]: I0216 21:32:10.965304 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzng7" event={"ID":"cf57c330-711d-4612-9a73-edda47f69403","Type":"ContainerDied","Data":"fcf9941d83abf8dd245a2b492efe379b9050f2fcb97e21e01713da30ef959938"} Feb 16 21:32:10 crc kubenswrapper[4811]: I0216 21:32:10.965325 4811 scope.go:117] "RemoveContainer" containerID="4775f854ef47a78a1a170a8c81f12eaaeb1e3eb70dd5db515cae5c0c94ad6aaa" Feb 16 21:32:10 crc kubenswrapper[4811]: I0216 21:32:10.965491 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzng7" Feb 16 21:32:10 crc kubenswrapper[4811]: I0216 21:32:10.996880 4811 scope.go:117] "RemoveContainer" containerID="84e31ee5793553e126f2c043be812a719cda8621e38d129cd3ee31040b6a3628" Feb 16 21:32:11 crc kubenswrapper[4811]: I0216 21:32:11.115296 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf57c330-711d-4612-9a73-edda47f69403-kube-api-access-mj9sv" (OuterVolumeSpecName: "kube-api-access-mj9sv") pod "cf57c330-711d-4612-9a73-edda47f69403" (UID: "cf57c330-711d-4612-9a73-edda47f69403"). InnerVolumeSpecName "kube-api-access-mj9sv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:32:11 crc kubenswrapper[4811]: I0216 21:32:11.120643 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj9sv\" (UniqueName: \"kubernetes.io/projected/cf57c330-711d-4612-9a73-edda47f69403-kube-api-access-mj9sv\") on node \"crc\" DevicePath \"\"" Feb 16 21:32:11 crc kubenswrapper[4811]: I0216 21:32:11.133288 4811 scope.go:117] "RemoveContainer" containerID="f71c34fde004e49621e9a6c60ca00b9d68158c71f2020c8849fdecb4bcb24bd7" Feb 16 21:32:11 crc kubenswrapper[4811]: I0216 21:32:11.161104 4811 scope.go:117] "RemoveContainer" containerID="4775f854ef47a78a1a170a8c81f12eaaeb1e3eb70dd5db515cae5c0c94ad6aaa" Feb 16 21:32:11 crc kubenswrapper[4811]: E0216 21:32:11.161678 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4775f854ef47a78a1a170a8c81f12eaaeb1e3eb70dd5db515cae5c0c94ad6aaa\": container with ID starting with 4775f854ef47a78a1a170a8c81f12eaaeb1e3eb70dd5db515cae5c0c94ad6aaa not found: ID does not exist" containerID="4775f854ef47a78a1a170a8c81f12eaaeb1e3eb70dd5db515cae5c0c94ad6aaa" Feb 16 21:32:11 crc kubenswrapper[4811]: I0216 21:32:11.161758 4811 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4775f854ef47a78a1a170a8c81f12eaaeb1e3eb70dd5db515cae5c0c94ad6aaa"} err="failed to get container status \"4775f854ef47a78a1a170a8c81f12eaaeb1e3eb70dd5db515cae5c0c94ad6aaa\": rpc error: code = NotFound desc = could not find container \"4775f854ef47a78a1a170a8c81f12eaaeb1e3eb70dd5db515cae5c0c94ad6aaa\": container with ID starting with 4775f854ef47a78a1a170a8c81f12eaaeb1e3eb70dd5db515cae5c0c94ad6aaa not found: ID does not exist" Feb 16 21:32:11 crc kubenswrapper[4811]: I0216 21:32:11.161778 4811 scope.go:117] "RemoveContainer" containerID="84e31ee5793553e126f2c043be812a719cda8621e38d129cd3ee31040b6a3628" Feb 16 21:32:11 crc kubenswrapper[4811]: E0216 21:32:11.162454 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84e31ee5793553e126f2c043be812a719cda8621e38d129cd3ee31040b6a3628\": container with ID starting with 84e31ee5793553e126f2c043be812a719cda8621e38d129cd3ee31040b6a3628 not found: ID does not exist" containerID="84e31ee5793553e126f2c043be812a719cda8621e38d129cd3ee31040b6a3628" Feb 16 21:32:11 crc kubenswrapper[4811]: I0216 21:32:11.162479 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84e31ee5793553e126f2c043be812a719cda8621e38d129cd3ee31040b6a3628"} err="failed to get container status \"84e31ee5793553e126f2c043be812a719cda8621e38d129cd3ee31040b6a3628\": rpc error: code = NotFound desc = could not find container \"84e31ee5793553e126f2c043be812a719cda8621e38d129cd3ee31040b6a3628\": container with ID starting with 84e31ee5793553e126f2c043be812a719cda8621e38d129cd3ee31040b6a3628 not found: ID does not exist" Feb 16 21:32:11 crc kubenswrapper[4811]: I0216 21:32:11.162494 4811 scope.go:117] "RemoveContainer" containerID="f71c34fde004e49621e9a6c60ca00b9d68158c71f2020c8849fdecb4bcb24bd7" Feb 16 21:32:11 crc kubenswrapper[4811]: E0216 21:32:11.162837 4811 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f71c34fde004e49621e9a6c60ca00b9d68158c71f2020c8849fdecb4bcb24bd7\": container with ID starting with f71c34fde004e49621e9a6c60ca00b9d68158c71f2020c8849fdecb4bcb24bd7 not found: ID does not exist" containerID="f71c34fde004e49621e9a6c60ca00b9d68158c71f2020c8849fdecb4bcb24bd7" Feb 16 21:32:11 crc kubenswrapper[4811]: I0216 21:32:11.162894 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f71c34fde004e49621e9a6c60ca00b9d68158c71f2020c8849fdecb4bcb24bd7"} err="failed to get container status \"f71c34fde004e49621e9a6c60ca00b9d68158c71f2020c8849fdecb4bcb24bd7\": rpc error: code = NotFound desc = could not find container \"f71c34fde004e49621e9a6c60ca00b9d68158c71f2020c8849fdecb4bcb24bd7\": container with ID starting with f71c34fde004e49621e9a6c60ca00b9d68158c71f2020c8849fdecb4bcb24bd7 not found: ID does not exist" Feb 16 21:32:11 crc kubenswrapper[4811]: I0216 21:32:11.299231 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzng7"] Feb 16 21:32:11 crc kubenswrapper[4811]: I0216 21:32:11.309405 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzng7"] Feb 16 21:32:12 crc kubenswrapper[4811]: I0216 21:32:12.730625 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf57c330-711d-4612-9a73-edda47f69403" path="/var/lib/kubelet/pods/cf57c330-711d-4612-9a73-edda47f69403/volumes" Feb 16 21:32:13 crc kubenswrapper[4811]: I0216 21:32:13.750838 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vk492"] Feb 16 21:32:13 crc kubenswrapper[4811]: E0216 21:32:13.751278 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf57c330-711d-4612-9a73-edda47f69403" containerName="registry-server" Feb 16 21:32:13 crc kubenswrapper[4811]: I0216 21:32:13.751290 4811 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="cf57c330-711d-4612-9a73-edda47f69403" containerName="registry-server" Feb 16 21:32:13 crc kubenswrapper[4811]: E0216 21:32:13.751326 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf57c330-711d-4612-9a73-edda47f69403" containerName="extract-utilities" Feb 16 21:32:13 crc kubenswrapper[4811]: I0216 21:32:13.751333 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf57c330-711d-4612-9a73-edda47f69403" containerName="extract-utilities" Feb 16 21:32:13 crc kubenswrapper[4811]: E0216 21:32:13.751353 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf57c330-711d-4612-9a73-edda47f69403" containerName="extract-content" Feb 16 21:32:13 crc kubenswrapper[4811]: I0216 21:32:13.751361 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf57c330-711d-4612-9a73-edda47f69403" containerName="extract-content" Feb 16 21:32:13 crc kubenswrapper[4811]: I0216 21:32:13.751534 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf57c330-711d-4612-9a73-edda47f69403" containerName="registry-server" Feb 16 21:32:13 crc kubenswrapper[4811]: I0216 21:32:13.753060 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vk492" Feb 16 21:32:13 crc kubenswrapper[4811]: I0216 21:32:13.776443 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/767dbea2-eae8-4d46-8064-6eb106644494-utilities\") pod \"redhat-operators-vk492\" (UID: \"767dbea2-eae8-4d46-8064-6eb106644494\") " pod="openshift-marketplace/redhat-operators-vk492" Feb 16 21:32:13 crc kubenswrapper[4811]: I0216 21:32:13.776496 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vk492"] Feb 16 21:32:13 crc kubenswrapper[4811]: I0216 21:32:13.776532 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4djk\" (UniqueName: \"kubernetes.io/projected/767dbea2-eae8-4d46-8064-6eb106644494-kube-api-access-n4djk\") pod \"redhat-operators-vk492\" (UID: \"767dbea2-eae8-4d46-8064-6eb106644494\") " pod="openshift-marketplace/redhat-operators-vk492" Feb 16 21:32:13 crc kubenswrapper[4811]: I0216 21:32:13.776636 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/767dbea2-eae8-4d46-8064-6eb106644494-catalog-content\") pod \"redhat-operators-vk492\" (UID: \"767dbea2-eae8-4d46-8064-6eb106644494\") " pod="openshift-marketplace/redhat-operators-vk492" Feb 16 21:32:13 crc kubenswrapper[4811]: I0216 21:32:13.878705 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/767dbea2-eae8-4d46-8064-6eb106644494-utilities\") pod \"redhat-operators-vk492\" (UID: \"767dbea2-eae8-4d46-8064-6eb106644494\") " pod="openshift-marketplace/redhat-operators-vk492" Feb 16 21:32:13 crc kubenswrapper[4811]: I0216 21:32:13.878805 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-n4djk\" (UniqueName: \"kubernetes.io/projected/767dbea2-eae8-4d46-8064-6eb106644494-kube-api-access-n4djk\") pod \"redhat-operators-vk492\" (UID: \"767dbea2-eae8-4d46-8064-6eb106644494\") " pod="openshift-marketplace/redhat-operators-vk492" Feb 16 21:32:13 crc kubenswrapper[4811]: I0216 21:32:13.878902 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/767dbea2-eae8-4d46-8064-6eb106644494-catalog-content\") pod \"redhat-operators-vk492\" (UID: \"767dbea2-eae8-4d46-8064-6eb106644494\") " pod="openshift-marketplace/redhat-operators-vk492" Feb 16 21:32:13 crc kubenswrapper[4811]: I0216 21:32:13.879113 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/767dbea2-eae8-4d46-8064-6eb106644494-utilities\") pod \"redhat-operators-vk492\" (UID: \"767dbea2-eae8-4d46-8064-6eb106644494\") " pod="openshift-marketplace/redhat-operators-vk492" Feb 16 21:32:13 crc kubenswrapper[4811]: I0216 21:32:13.879162 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/767dbea2-eae8-4d46-8064-6eb106644494-catalog-content\") pod \"redhat-operators-vk492\" (UID: \"767dbea2-eae8-4d46-8064-6eb106644494\") " pod="openshift-marketplace/redhat-operators-vk492" Feb 16 21:32:13 crc kubenswrapper[4811]: I0216 21:32:13.912227 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4djk\" (UniqueName: \"kubernetes.io/projected/767dbea2-eae8-4d46-8064-6eb106644494-kube-api-access-n4djk\") pod \"redhat-operators-vk492\" (UID: \"767dbea2-eae8-4d46-8064-6eb106644494\") " pod="openshift-marketplace/redhat-operators-vk492" Feb 16 21:32:14 crc kubenswrapper[4811]: I0216 21:32:14.100788 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vk492" Feb 16 21:32:14 crc kubenswrapper[4811]: I0216 21:32:14.540233 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vk492"] Feb 16 21:32:15 crc kubenswrapper[4811]: I0216 21:32:15.006736 4811 generic.go:334] "Generic (PLEG): container finished" podID="767dbea2-eae8-4d46-8064-6eb106644494" containerID="512f8be05d126f420a8026d62add5ad7a11d52ee32af140fa48a4d585e582f68" exitCode=0 Feb 16 21:32:15 crc kubenswrapper[4811]: I0216 21:32:15.006792 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk492" event={"ID":"767dbea2-eae8-4d46-8064-6eb106644494","Type":"ContainerDied","Data":"512f8be05d126f420a8026d62add5ad7a11d52ee32af140fa48a4d585e582f68"} Feb 16 21:32:15 crc kubenswrapper[4811]: I0216 21:32:15.006828 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk492" event={"ID":"767dbea2-eae8-4d46-8064-6eb106644494","Type":"ContainerStarted","Data":"37c90f44d69fc09283902d97b9906218534049a77c35e7dd3c1a28dc35d3db3b"} Feb 16 21:32:16 crc kubenswrapper[4811]: I0216 21:32:16.018212 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk492" event={"ID":"767dbea2-eae8-4d46-8064-6eb106644494","Type":"ContainerStarted","Data":"58b50204070475c459f1e577c58d80b4c3fa30b7623969a569b4ce9f301f04b1"} Feb 16 21:32:16 crc kubenswrapper[4811]: E0216 21:32:16.704535 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:32:18 crc kubenswrapper[4811]: I0216 21:32:18.038229 4811 generic.go:334] "Generic (PLEG): container finished" 
podID="767dbea2-eae8-4d46-8064-6eb106644494" containerID="58b50204070475c459f1e577c58d80b4c3fa30b7623969a569b4ce9f301f04b1" exitCode=0 Feb 16 21:32:18 crc kubenswrapper[4811]: I0216 21:32:18.038289 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk492" event={"ID":"767dbea2-eae8-4d46-8064-6eb106644494","Type":"ContainerDied","Data":"58b50204070475c459f1e577c58d80b4c3fa30b7623969a569b4ce9f301f04b1"} Feb 16 21:32:18 crc kubenswrapper[4811]: I0216 21:32:18.363741 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:32:18 crc kubenswrapper[4811]: I0216 21:32:18.364044 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:32:19 crc kubenswrapper[4811]: I0216 21:32:19.066092 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk492" event={"ID":"767dbea2-eae8-4d46-8064-6eb106644494","Type":"ContainerStarted","Data":"a81e9ee3a8d65c6ea3c928d4a3a38ef0b9ebbe3bc97a0af38925675c34c10002"} Feb 16 21:32:24 crc kubenswrapper[4811]: I0216 21:32:24.101313 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vk492" Feb 16 21:32:24 crc kubenswrapper[4811]: I0216 21:32:24.103507 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vk492" Feb 16 21:32:25 crc kubenswrapper[4811]: I0216 21:32:25.155651 4811 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-vk492" podUID="767dbea2-eae8-4d46-8064-6eb106644494" containerName="registry-server" probeResult="failure" output=< Feb 16 21:32:25 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Feb 16 21:32:25 crc kubenswrapper[4811]: > Feb 16 21:32:28 crc kubenswrapper[4811]: E0216 21:32:28.707385 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:32:34 crc kubenswrapper[4811]: I0216 21:32:34.179402 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vk492" Feb 16 21:32:34 crc kubenswrapper[4811]: I0216 21:32:34.208147 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vk492" podStartSLOduration=17.541942527 podStartE2EDuration="21.208128422s" podCreationTimestamp="2026-02-16 21:32:13 +0000 UTC" firstStartedPulling="2026-02-16 21:32:15.009155729 +0000 UTC m=+2152.938451667" lastFinishedPulling="2026-02-16 21:32:18.675341624 +0000 UTC m=+2156.604637562" observedRunningTime="2026-02-16 21:32:19.09782975 +0000 UTC m=+2157.027125688" watchObservedRunningTime="2026-02-16 21:32:34.208128422 +0000 UTC m=+2172.137424370" Feb 16 21:32:34 crc kubenswrapper[4811]: I0216 21:32:34.267347 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vk492" Feb 16 21:32:34 crc kubenswrapper[4811]: I0216 21:32:34.425608 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vk492"] Feb 16 21:32:35 crc kubenswrapper[4811]: I0216 21:32:35.245003 4811 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vk492" podUID="767dbea2-eae8-4d46-8064-6eb106644494" containerName="registry-server" containerID="cri-o://a81e9ee3a8d65c6ea3c928d4a3a38ef0b9ebbe3bc97a0af38925675c34c10002" gracePeriod=2 Feb 16 21:32:35 crc kubenswrapper[4811]: I0216 21:32:35.832122 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vk492" Feb 16 21:32:35 crc kubenswrapper[4811]: I0216 21:32:35.898042 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/767dbea2-eae8-4d46-8064-6eb106644494-catalog-content\") pod \"767dbea2-eae8-4d46-8064-6eb106644494\" (UID: \"767dbea2-eae8-4d46-8064-6eb106644494\") " Feb 16 21:32:35 crc kubenswrapper[4811]: I0216 21:32:35.898101 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4djk\" (UniqueName: \"kubernetes.io/projected/767dbea2-eae8-4d46-8064-6eb106644494-kube-api-access-n4djk\") pod \"767dbea2-eae8-4d46-8064-6eb106644494\" (UID: \"767dbea2-eae8-4d46-8064-6eb106644494\") " Feb 16 21:32:35 crc kubenswrapper[4811]: I0216 21:32:35.898243 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/767dbea2-eae8-4d46-8064-6eb106644494-utilities\") pod \"767dbea2-eae8-4d46-8064-6eb106644494\" (UID: \"767dbea2-eae8-4d46-8064-6eb106644494\") " Feb 16 21:32:35 crc kubenswrapper[4811]: I0216 21:32:35.899100 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/767dbea2-eae8-4d46-8064-6eb106644494-utilities" (OuterVolumeSpecName: "utilities") pod "767dbea2-eae8-4d46-8064-6eb106644494" (UID: "767dbea2-eae8-4d46-8064-6eb106644494"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:32:35 crc kubenswrapper[4811]: I0216 21:32:35.907560 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/767dbea2-eae8-4d46-8064-6eb106644494-kube-api-access-n4djk" (OuterVolumeSpecName: "kube-api-access-n4djk") pod "767dbea2-eae8-4d46-8064-6eb106644494" (UID: "767dbea2-eae8-4d46-8064-6eb106644494"). InnerVolumeSpecName "kube-api-access-n4djk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:32:36 crc kubenswrapper[4811]: I0216 21:32:36.000471 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4djk\" (UniqueName: \"kubernetes.io/projected/767dbea2-eae8-4d46-8064-6eb106644494-kube-api-access-n4djk\") on node \"crc\" DevicePath \"\"" Feb 16 21:32:36 crc kubenswrapper[4811]: I0216 21:32:36.000504 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/767dbea2-eae8-4d46-8064-6eb106644494-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:32:36 crc kubenswrapper[4811]: I0216 21:32:36.017451 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/767dbea2-eae8-4d46-8064-6eb106644494-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "767dbea2-eae8-4d46-8064-6eb106644494" (UID: "767dbea2-eae8-4d46-8064-6eb106644494"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:32:36 crc kubenswrapper[4811]: I0216 21:32:36.103058 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/767dbea2-eae8-4d46-8064-6eb106644494-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:32:36 crc kubenswrapper[4811]: I0216 21:32:36.260489 4811 generic.go:334] "Generic (PLEG): container finished" podID="767dbea2-eae8-4d46-8064-6eb106644494" containerID="a81e9ee3a8d65c6ea3c928d4a3a38ef0b9ebbe3bc97a0af38925675c34c10002" exitCode=0 Feb 16 21:32:36 crc kubenswrapper[4811]: I0216 21:32:36.260540 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk492" event={"ID":"767dbea2-eae8-4d46-8064-6eb106644494","Type":"ContainerDied","Data":"a81e9ee3a8d65c6ea3c928d4a3a38ef0b9ebbe3bc97a0af38925675c34c10002"} Feb 16 21:32:36 crc kubenswrapper[4811]: I0216 21:32:36.260573 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk492" event={"ID":"767dbea2-eae8-4d46-8064-6eb106644494","Type":"ContainerDied","Data":"37c90f44d69fc09283902d97b9906218534049a77c35e7dd3c1a28dc35d3db3b"} Feb 16 21:32:36 crc kubenswrapper[4811]: I0216 21:32:36.260595 4811 scope.go:117] "RemoveContainer" containerID="a81e9ee3a8d65c6ea3c928d4a3a38ef0b9ebbe3bc97a0af38925675c34c10002" Feb 16 21:32:36 crc kubenswrapper[4811]: I0216 21:32:36.262186 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vk492" Feb 16 21:32:36 crc kubenswrapper[4811]: I0216 21:32:36.299628 4811 scope.go:117] "RemoveContainer" containerID="58b50204070475c459f1e577c58d80b4c3fa30b7623969a569b4ce9f301f04b1" Feb 16 21:32:36 crc kubenswrapper[4811]: I0216 21:32:36.304736 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vk492"] Feb 16 21:32:36 crc kubenswrapper[4811]: I0216 21:32:36.312897 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vk492"] Feb 16 21:32:36 crc kubenswrapper[4811]: I0216 21:32:36.331764 4811 scope.go:117] "RemoveContainer" containerID="512f8be05d126f420a8026d62add5ad7a11d52ee32af140fa48a4d585e582f68" Feb 16 21:32:36 crc kubenswrapper[4811]: I0216 21:32:36.388468 4811 scope.go:117] "RemoveContainer" containerID="a81e9ee3a8d65c6ea3c928d4a3a38ef0b9ebbe3bc97a0af38925675c34c10002" Feb 16 21:32:36 crc kubenswrapper[4811]: E0216 21:32:36.389059 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a81e9ee3a8d65c6ea3c928d4a3a38ef0b9ebbe3bc97a0af38925675c34c10002\": container with ID starting with a81e9ee3a8d65c6ea3c928d4a3a38ef0b9ebbe3bc97a0af38925675c34c10002 not found: ID does not exist" containerID="a81e9ee3a8d65c6ea3c928d4a3a38ef0b9ebbe3bc97a0af38925675c34c10002" Feb 16 21:32:36 crc kubenswrapper[4811]: I0216 21:32:36.389123 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a81e9ee3a8d65c6ea3c928d4a3a38ef0b9ebbe3bc97a0af38925675c34c10002"} err="failed to get container status \"a81e9ee3a8d65c6ea3c928d4a3a38ef0b9ebbe3bc97a0af38925675c34c10002\": rpc error: code = NotFound desc = could not find container \"a81e9ee3a8d65c6ea3c928d4a3a38ef0b9ebbe3bc97a0af38925675c34c10002\": container with ID starting with a81e9ee3a8d65c6ea3c928d4a3a38ef0b9ebbe3bc97a0af38925675c34c10002 not found: ID does 
not exist" Feb 16 21:32:36 crc kubenswrapper[4811]: I0216 21:32:36.389156 4811 scope.go:117] "RemoveContainer" containerID="58b50204070475c459f1e577c58d80b4c3fa30b7623969a569b4ce9f301f04b1" Feb 16 21:32:36 crc kubenswrapper[4811]: E0216 21:32:36.389850 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58b50204070475c459f1e577c58d80b4c3fa30b7623969a569b4ce9f301f04b1\": container with ID starting with 58b50204070475c459f1e577c58d80b4c3fa30b7623969a569b4ce9f301f04b1 not found: ID does not exist" containerID="58b50204070475c459f1e577c58d80b4c3fa30b7623969a569b4ce9f301f04b1" Feb 16 21:32:36 crc kubenswrapper[4811]: I0216 21:32:36.389898 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58b50204070475c459f1e577c58d80b4c3fa30b7623969a569b4ce9f301f04b1"} err="failed to get container status \"58b50204070475c459f1e577c58d80b4c3fa30b7623969a569b4ce9f301f04b1\": rpc error: code = NotFound desc = could not find container \"58b50204070475c459f1e577c58d80b4c3fa30b7623969a569b4ce9f301f04b1\": container with ID starting with 58b50204070475c459f1e577c58d80b4c3fa30b7623969a569b4ce9f301f04b1 not found: ID does not exist" Feb 16 21:32:36 crc kubenswrapper[4811]: I0216 21:32:36.389935 4811 scope.go:117] "RemoveContainer" containerID="512f8be05d126f420a8026d62add5ad7a11d52ee32af140fa48a4d585e582f68" Feb 16 21:32:36 crc kubenswrapper[4811]: E0216 21:32:36.390416 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"512f8be05d126f420a8026d62add5ad7a11d52ee32af140fa48a4d585e582f68\": container with ID starting with 512f8be05d126f420a8026d62add5ad7a11d52ee32af140fa48a4d585e582f68 not found: ID does not exist" containerID="512f8be05d126f420a8026d62add5ad7a11d52ee32af140fa48a4d585e582f68" Feb 16 21:32:36 crc kubenswrapper[4811]: I0216 21:32:36.390453 4811 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"512f8be05d126f420a8026d62add5ad7a11d52ee32af140fa48a4d585e582f68"} err="failed to get container status \"512f8be05d126f420a8026d62add5ad7a11d52ee32af140fa48a4d585e582f68\": rpc error: code = NotFound desc = could not find container \"512f8be05d126f420a8026d62add5ad7a11d52ee32af140fa48a4d585e582f68\": container with ID starting with 512f8be05d126f420a8026d62add5ad7a11d52ee32af140fa48a4d585e582f68 not found: ID does not exist" Feb 16 21:32:36 crc kubenswrapper[4811]: I0216 21:32:36.731351 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="767dbea2-eae8-4d46-8064-6eb106644494" path="/var/lib/kubelet/pods/767dbea2-eae8-4d46-8064-6eb106644494/volumes" Feb 16 21:32:41 crc kubenswrapper[4811]: E0216 21:32:41.707744 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:32:48 crc kubenswrapper[4811]: I0216 21:32:48.364626 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:32:48 crc kubenswrapper[4811]: I0216 21:32:48.365381 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:32:52 crc kubenswrapper[4811]: E0216 21:32:52.714908 4811 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:33:07 crc kubenswrapper[4811]: E0216 21:33:07.705537 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:33:18 crc kubenswrapper[4811]: I0216 21:33:18.363702 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:33:18 crc kubenswrapper[4811]: I0216 21:33:18.364313 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:33:18 crc kubenswrapper[4811]: I0216 21:33:18.364361 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" Feb 16 21:33:18 crc kubenswrapper[4811]: I0216 21:33:18.365104 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d"} 
pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:33:18 crc kubenswrapper[4811]: I0216 21:33:18.365160 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" containerID="cri-o://34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" gracePeriod=600 Feb 16 21:33:18 crc kubenswrapper[4811]: E0216 21:33:18.491289 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:33:18 crc kubenswrapper[4811]: I0216 21:33:18.728932 4811 generic.go:334] "Generic (PLEG): container finished" podID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" exitCode=0 Feb 16 21:33:18 crc kubenswrapper[4811]: I0216 21:33:18.731728 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerDied","Data":"34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d"} Feb 16 21:33:18 crc kubenswrapper[4811]: I0216 21:33:18.731825 4811 scope.go:117] "RemoveContainer" containerID="76c556c711513a9cb0790f80311289c079022edb69fd8537889f8c774cf64f83" Feb 16 21:33:18 crc kubenswrapper[4811]: I0216 21:33:18.732950 4811 scope.go:117] "RemoveContainer" containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" Feb 
16 21:33:18 crc kubenswrapper[4811]: E0216 21:33:18.733466 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:33:20 crc kubenswrapper[4811]: E0216 21:33:20.706096 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:33:29 crc kubenswrapper[4811]: I0216 21:33:29.703405 4811 scope.go:117] "RemoveContainer" containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" Feb 16 21:33:29 crc kubenswrapper[4811]: E0216 21:33:29.704417 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:33:33 crc kubenswrapper[4811]: E0216 21:33:33.704760 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:33:44 crc 
kubenswrapper[4811]: I0216 21:33:44.703054 4811 scope.go:117] "RemoveContainer" containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" Feb 16 21:33:44 crc kubenswrapper[4811]: E0216 21:33:44.704392 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:33:48 crc kubenswrapper[4811]: E0216 21:33:48.707559 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:33:59 crc kubenswrapper[4811]: I0216 21:33:59.703660 4811 scope.go:117] "RemoveContainer" containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" Feb 16 21:33:59 crc kubenswrapper[4811]: E0216 21:33:59.704689 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:34:01 crc kubenswrapper[4811]: E0216 21:34:01.705988 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:34:10 crc kubenswrapper[4811]: I0216 21:34:10.703627 4811 scope.go:117] "RemoveContainer" containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" Feb 16 21:34:10 crc kubenswrapper[4811]: E0216 21:34:10.704891 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:34:14 crc kubenswrapper[4811]: E0216 21:34:14.704872 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:34:21 crc kubenswrapper[4811]: I0216 21:34:21.703501 4811 scope.go:117] "RemoveContainer" containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" Feb 16 21:34:21 crc kubenswrapper[4811]: E0216 21:34:21.704688 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:34:29 crc kubenswrapper[4811]: I0216 21:34:29.707078 4811 
provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:34:29 crc kubenswrapper[4811]: E0216 21:34:29.856219 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 21:34:29 crc kubenswrapper[4811]: E0216 21:34:29.856260 4811 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 21:34:29 crc kubenswrapper[4811]: E0216 21:34:29.856575 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s56zx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-x49kk_openstack(46d0afcb-2a14-4e67-89fc-ed848d1637ce): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:34:29 crc kubenswrapper[4811]: E0216 21:34:29.857831 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:34:30 crc kubenswrapper[4811]: I0216 21:34:30.271050 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z9wcz"] Feb 16 21:34:30 crc kubenswrapper[4811]: E0216 21:34:30.272282 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="767dbea2-eae8-4d46-8064-6eb106644494" containerName="extract-content" Feb 16 21:34:30 crc kubenswrapper[4811]: I0216 21:34:30.272429 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="767dbea2-eae8-4d46-8064-6eb106644494" containerName="extract-content" Feb 16 21:34:30 crc kubenswrapper[4811]: E0216 21:34:30.272569 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="767dbea2-eae8-4d46-8064-6eb106644494" containerName="extract-utilities" Feb 16 21:34:30 crc kubenswrapper[4811]: I0216 21:34:30.272667 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="767dbea2-eae8-4d46-8064-6eb106644494" containerName="extract-utilities" Feb 16 21:34:30 crc kubenswrapper[4811]: E0216 21:34:30.272807 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="767dbea2-eae8-4d46-8064-6eb106644494" containerName="registry-server" Feb 16 21:34:30 crc kubenswrapper[4811]: I0216 21:34:30.272899 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="767dbea2-eae8-4d46-8064-6eb106644494" containerName="registry-server" Feb 16 21:34:30 crc kubenswrapper[4811]: I0216 21:34:30.273344 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="767dbea2-eae8-4d46-8064-6eb106644494" containerName="registry-server" Feb 16 21:34:30 crc kubenswrapper[4811]: I0216 21:34:30.294848 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z9wcz" Feb 16 21:34:30 crc kubenswrapper[4811]: I0216 21:34:30.307701 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z9wcz"] Feb 16 21:34:30 crc kubenswrapper[4811]: I0216 21:34:30.357718 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn85f\" (UniqueName: \"kubernetes.io/projected/ab5cf036-2a30-4350-a880-07990c100da6-kube-api-access-xn85f\") pod \"community-operators-z9wcz\" (UID: \"ab5cf036-2a30-4350-a880-07990c100da6\") " pod="openshift-marketplace/community-operators-z9wcz" Feb 16 21:34:30 crc kubenswrapper[4811]: I0216 21:34:30.357918 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab5cf036-2a30-4350-a880-07990c100da6-utilities\") pod \"community-operators-z9wcz\" (UID: \"ab5cf036-2a30-4350-a880-07990c100da6\") " pod="openshift-marketplace/community-operators-z9wcz" Feb 16 21:34:30 crc kubenswrapper[4811]: I0216 21:34:30.357944 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab5cf036-2a30-4350-a880-07990c100da6-catalog-content\") pod \"community-operators-z9wcz\" (UID: \"ab5cf036-2a30-4350-a880-07990c100da6\") " pod="openshift-marketplace/community-operators-z9wcz" Feb 16 21:34:30 crc kubenswrapper[4811]: I0216 21:34:30.460667 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab5cf036-2a30-4350-a880-07990c100da6-utilities\") pod \"community-operators-z9wcz\" (UID: \"ab5cf036-2a30-4350-a880-07990c100da6\") " pod="openshift-marketplace/community-operators-z9wcz" Feb 16 21:34:30 crc kubenswrapper[4811]: I0216 21:34:30.461002 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab5cf036-2a30-4350-a880-07990c100da6-catalog-content\") pod \"community-operators-z9wcz\" (UID: \"ab5cf036-2a30-4350-a880-07990c100da6\") " pod="openshift-marketplace/community-operators-z9wcz" Feb 16 21:34:30 crc kubenswrapper[4811]: I0216 21:34:30.461223 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn85f\" (UniqueName: \"kubernetes.io/projected/ab5cf036-2a30-4350-a880-07990c100da6-kube-api-access-xn85f\") pod \"community-operators-z9wcz\" (UID: \"ab5cf036-2a30-4350-a880-07990c100da6\") " pod="openshift-marketplace/community-operators-z9wcz" Feb 16 21:34:30 crc kubenswrapper[4811]: I0216 21:34:30.461261 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab5cf036-2a30-4350-a880-07990c100da6-utilities\") pod \"community-operators-z9wcz\" (UID: \"ab5cf036-2a30-4350-a880-07990c100da6\") " pod="openshift-marketplace/community-operators-z9wcz" Feb 16 21:34:30 crc kubenswrapper[4811]: I0216 21:34:30.461570 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab5cf036-2a30-4350-a880-07990c100da6-catalog-content\") pod \"community-operators-z9wcz\" (UID: \"ab5cf036-2a30-4350-a880-07990c100da6\") " pod="openshift-marketplace/community-operators-z9wcz" Feb 16 21:34:30 crc kubenswrapper[4811]: I0216 21:34:30.491567 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn85f\" (UniqueName: \"kubernetes.io/projected/ab5cf036-2a30-4350-a880-07990c100da6-kube-api-access-xn85f\") pod \"community-operators-z9wcz\" (UID: \"ab5cf036-2a30-4350-a880-07990c100da6\") " pod="openshift-marketplace/community-operators-z9wcz" Feb 16 21:34:30 crc kubenswrapper[4811]: I0216 21:34:30.637350 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z9wcz" Feb 16 21:34:31 crc kubenswrapper[4811]: I0216 21:34:31.171865 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z9wcz"] Feb 16 21:34:31 crc kubenswrapper[4811]: I0216 21:34:31.602811 4811 generic.go:334] "Generic (PLEG): container finished" podID="ab5cf036-2a30-4350-a880-07990c100da6" containerID="219f9aaa0ada1124addfb1b64c23a6f74973775703329a29e2dcdb8e5b858f52" exitCode=0 Feb 16 21:34:31 crc kubenswrapper[4811]: I0216 21:34:31.603035 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9wcz" event={"ID":"ab5cf036-2a30-4350-a880-07990c100da6","Type":"ContainerDied","Data":"219f9aaa0ada1124addfb1b64c23a6f74973775703329a29e2dcdb8e5b858f52"} Feb 16 21:34:31 crc kubenswrapper[4811]: I0216 21:34:31.603155 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9wcz" event={"ID":"ab5cf036-2a30-4350-a880-07990c100da6","Type":"ContainerStarted","Data":"d634d371fe31a70531dbbc32e1e984a608ac948215b545b1b45f99bd3cb253ab"} Feb 16 21:34:32 crc kubenswrapper[4811]: I0216 21:34:32.613283 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9wcz" event={"ID":"ab5cf036-2a30-4350-a880-07990c100da6","Type":"ContainerStarted","Data":"e6e778bb88140a01e3976f71fc8c710b26ee291358d282390d5e5bc38cb8ae5c"} Feb 16 21:34:33 crc kubenswrapper[4811]: I0216 21:34:33.622045 4811 generic.go:334] "Generic (PLEG): container finished" podID="ab5cf036-2a30-4350-a880-07990c100da6" containerID="e6e778bb88140a01e3976f71fc8c710b26ee291358d282390d5e5bc38cb8ae5c" exitCode=0 Feb 16 21:34:33 crc kubenswrapper[4811]: I0216 21:34:33.622084 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9wcz" 
event={"ID":"ab5cf036-2a30-4350-a880-07990c100da6","Type":"ContainerDied","Data":"e6e778bb88140a01e3976f71fc8c710b26ee291358d282390d5e5bc38cb8ae5c"} Feb 16 21:34:34 crc kubenswrapper[4811]: I0216 21:34:34.635917 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9wcz" event={"ID":"ab5cf036-2a30-4350-a880-07990c100da6","Type":"ContainerStarted","Data":"e537439507e0ff0bfd4690a9ddf6f3a57ca7cd695d3e401b9d8f43243c1b947a"} Feb 16 21:34:34 crc kubenswrapper[4811]: I0216 21:34:34.675354 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z9wcz" podStartSLOduration=2.271879842 podStartE2EDuration="4.67532589s" podCreationTimestamp="2026-02-16 21:34:30 +0000 UTC" firstStartedPulling="2026-02-16 21:34:31.604631951 +0000 UTC m=+2289.533927889" lastFinishedPulling="2026-02-16 21:34:34.008077999 +0000 UTC m=+2291.937373937" observedRunningTime="2026-02-16 21:34:34.65983278 +0000 UTC m=+2292.589128748" watchObservedRunningTime="2026-02-16 21:34:34.67532589 +0000 UTC m=+2292.604621848" Feb 16 21:34:35 crc kubenswrapper[4811]: I0216 21:34:35.703164 4811 scope.go:117] "RemoveContainer" containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" Feb 16 21:34:35 crc kubenswrapper[4811]: E0216 21:34:35.703534 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:34:40 crc kubenswrapper[4811]: I0216 21:34:40.637503 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z9wcz" Feb 16 21:34:40 crc 
kubenswrapper[4811]: I0216 21:34:40.638128 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z9wcz" Feb 16 21:34:40 crc kubenswrapper[4811]: I0216 21:34:40.730718 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z9wcz" Feb 16 21:34:40 crc kubenswrapper[4811]: I0216 21:34:40.800541 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z9wcz" Feb 16 21:34:40 crc kubenswrapper[4811]: I0216 21:34:40.987008 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z9wcz"] Feb 16 21:34:42 crc kubenswrapper[4811]: I0216 21:34:42.728340 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z9wcz" podUID="ab5cf036-2a30-4350-a880-07990c100da6" containerName="registry-server" containerID="cri-o://e537439507e0ff0bfd4690a9ddf6f3a57ca7cd695d3e401b9d8f43243c1b947a" gracePeriod=2 Feb 16 21:34:43 crc kubenswrapper[4811]: I0216 21:34:43.383302 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z9wcz" Feb 16 21:34:43 crc kubenswrapper[4811]: I0216 21:34:43.483970 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab5cf036-2a30-4350-a880-07990c100da6-catalog-content\") pod \"ab5cf036-2a30-4350-a880-07990c100da6\" (UID: \"ab5cf036-2a30-4350-a880-07990c100da6\") " Feb 16 21:34:43 crc kubenswrapper[4811]: I0216 21:34:43.484064 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab5cf036-2a30-4350-a880-07990c100da6-utilities\") pod \"ab5cf036-2a30-4350-a880-07990c100da6\" (UID: \"ab5cf036-2a30-4350-a880-07990c100da6\") " Feb 16 21:34:43 crc kubenswrapper[4811]: I0216 21:34:43.484321 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn85f\" (UniqueName: \"kubernetes.io/projected/ab5cf036-2a30-4350-a880-07990c100da6-kube-api-access-xn85f\") pod \"ab5cf036-2a30-4350-a880-07990c100da6\" (UID: \"ab5cf036-2a30-4350-a880-07990c100da6\") " Feb 16 21:34:43 crc kubenswrapper[4811]: I0216 21:34:43.485329 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab5cf036-2a30-4350-a880-07990c100da6-utilities" (OuterVolumeSpecName: "utilities") pod "ab5cf036-2a30-4350-a880-07990c100da6" (UID: "ab5cf036-2a30-4350-a880-07990c100da6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:34:43 crc kubenswrapper[4811]: I0216 21:34:43.492262 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab5cf036-2a30-4350-a880-07990c100da6-kube-api-access-xn85f" (OuterVolumeSpecName: "kube-api-access-xn85f") pod "ab5cf036-2a30-4350-a880-07990c100da6" (UID: "ab5cf036-2a30-4350-a880-07990c100da6"). InnerVolumeSpecName "kube-api-access-xn85f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:34:43 crc kubenswrapper[4811]: I0216 21:34:43.537657 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab5cf036-2a30-4350-a880-07990c100da6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab5cf036-2a30-4350-a880-07990c100da6" (UID: "ab5cf036-2a30-4350-a880-07990c100da6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:34:43 crc kubenswrapper[4811]: I0216 21:34:43.586137 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn85f\" (UniqueName: \"kubernetes.io/projected/ab5cf036-2a30-4350-a880-07990c100da6-kube-api-access-xn85f\") on node \"crc\" DevicePath \"\"" Feb 16 21:34:43 crc kubenswrapper[4811]: I0216 21:34:43.586170 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab5cf036-2a30-4350-a880-07990c100da6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:34:43 crc kubenswrapper[4811]: I0216 21:34:43.586180 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab5cf036-2a30-4350-a880-07990c100da6-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:34:43 crc kubenswrapper[4811]: E0216 21:34:43.704268 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:34:43 crc kubenswrapper[4811]: I0216 21:34:43.743336 4811 generic.go:334] "Generic (PLEG): container finished" podID="ab5cf036-2a30-4350-a880-07990c100da6" containerID="e537439507e0ff0bfd4690a9ddf6f3a57ca7cd695d3e401b9d8f43243c1b947a" exitCode=0 Feb 16 21:34:43 crc 
kubenswrapper[4811]: I0216 21:34:43.743785 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z9wcz" Feb 16 21:34:43 crc kubenswrapper[4811]: I0216 21:34:43.743780 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9wcz" event={"ID":"ab5cf036-2a30-4350-a880-07990c100da6","Type":"ContainerDied","Data":"e537439507e0ff0bfd4690a9ddf6f3a57ca7cd695d3e401b9d8f43243c1b947a"} Feb 16 21:34:43 crc kubenswrapper[4811]: I0216 21:34:43.744653 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9wcz" event={"ID":"ab5cf036-2a30-4350-a880-07990c100da6","Type":"ContainerDied","Data":"d634d371fe31a70531dbbc32e1e984a608ac948215b545b1b45f99bd3cb253ab"} Feb 16 21:34:43 crc kubenswrapper[4811]: I0216 21:34:43.744717 4811 scope.go:117] "RemoveContainer" containerID="e537439507e0ff0bfd4690a9ddf6f3a57ca7cd695d3e401b9d8f43243c1b947a" Feb 16 21:34:43 crc kubenswrapper[4811]: I0216 21:34:43.782702 4811 scope.go:117] "RemoveContainer" containerID="e6e778bb88140a01e3976f71fc8c710b26ee291358d282390d5e5bc38cb8ae5c" Feb 16 21:34:43 crc kubenswrapper[4811]: I0216 21:34:43.787150 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z9wcz"] Feb 16 21:34:43 crc kubenswrapper[4811]: I0216 21:34:43.804233 4811 scope.go:117] "RemoveContainer" containerID="219f9aaa0ada1124addfb1b64c23a6f74973775703329a29e2dcdb8e5b858f52" Feb 16 21:34:43 crc kubenswrapper[4811]: I0216 21:34:43.804702 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z9wcz"] Feb 16 21:34:43 crc kubenswrapper[4811]: I0216 21:34:43.846094 4811 scope.go:117] "RemoveContainer" containerID="e537439507e0ff0bfd4690a9ddf6f3a57ca7cd695d3e401b9d8f43243c1b947a" Feb 16 21:34:43 crc kubenswrapper[4811]: E0216 21:34:43.846951 4811 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e537439507e0ff0bfd4690a9ddf6f3a57ca7cd695d3e401b9d8f43243c1b947a\": container with ID starting with e537439507e0ff0bfd4690a9ddf6f3a57ca7cd695d3e401b9d8f43243c1b947a not found: ID does not exist" containerID="e537439507e0ff0bfd4690a9ddf6f3a57ca7cd695d3e401b9d8f43243c1b947a" Feb 16 21:34:43 crc kubenswrapper[4811]: I0216 21:34:43.847077 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e537439507e0ff0bfd4690a9ddf6f3a57ca7cd695d3e401b9d8f43243c1b947a"} err="failed to get container status \"e537439507e0ff0bfd4690a9ddf6f3a57ca7cd695d3e401b9d8f43243c1b947a\": rpc error: code = NotFound desc = could not find container \"e537439507e0ff0bfd4690a9ddf6f3a57ca7cd695d3e401b9d8f43243c1b947a\": container with ID starting with e537439507e0ff0bfd4690a9ddf6f3a57ca7cd695d3e401b9d8f43243c1b947a not found: ID does not exist" Feb 16 21:34:43 crc kubenswrapper[4811]: I0216 21:34:43.847182 4811 scope.go:117] "RemoveContainer" containerID="e6e778bb88140a01e3976f71fc8c710b26ee291358d282390d5e5bc38cb8ae5c" Feb 16 21:34:43 crc kubenswrapper[4811]: E0216 21:34:43.847671 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6e778bb88140a01e3976f71fc8c710b26ee291358d282390d5e5bc38cb8ae5c\": container with ID starting with e6e778bb88140a01e3976f71fc8c710b26ee291358d282390d5e5bc38cb8ae5c not found: ID does not exist" containerID="e6e778bb88140a01e3976f71fc8c710b26ee291358d282390d5e5bc38cb8ae5c" Feb 16 21:34:43 crc kubenswrapper[4811]: I0216 21:34:43.847726 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6e778bb88140a01e3976f71fc8c710b26ee291358d282390d5e5bc38cb8ae5c"} err="failed to get container status \"e6e778bb88140a01e3976f71fc8c710b26ee291358d282390d5e5bc38cb8ae5c\": rpc error: code = NotFound desc = could not find container 
\"e6e778bb88140a01e3976f71fc8c710b26ee291358d282390d5e5bc38cb8ae5c\": container with ID starting with e6e778bb88140a01e3976f71fc8c710b26ee291358d282390d5e5bc38cb8ae5c not found: ID does not exist" Feb 16 21:34:43 crc kubenswrapper[4811]: I0216 21:34:43.847765 4811 scope.go:117] "RemoveContainer" containerID="219f9aaa0ada1124addfb1b64c23a6f74973775703329a29e2dcdb8e5b858f52" Feb 16 21:34:43 crc kubenswrapper[4811]: E0216 21:34:43.848175 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"219f9aaa0ada1124addfb1b64c23a6f74973775703329a29e2dcdb8e5b858f52\": container with ID starting with 219f9aaa0ada1124addfb1b64c23a6f74973775703329a29e2dcdb8e5b858f52 not found: ID does not exist" containerID="219f9aaa0ada1124addfb1b64c23a6f74973775703329a29e2dcdb8e5b858f52" Feb 16 21:34:43 crc kubenswrapper[4811]: I0216 21:34:43.848246 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"219f9aaa0ada1124addfb1b64c23a6f74973775703329a29e2dcdb8e5b858f52"} err="failed to get container status \"219f9aaa0ada1124addfb1b64c23a6f74973775703329a29e2dcdb8e5b858f52\": rpc error: code = NotFound desc = could not find container \"219f9aaa0ada1124addfb1b64c23a6f74973775703329a29e2dcdb8e5b858f52\": container with ID starting with 219f9aaa0ada1124addfb1b64c23a6f74973775703329a29e2dcdb8e5b858f52 not found: ID does not exist" Feb 16 21:34:44 crc kubenswrapper[4811]: I0216 21:34:44.715098 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab5cf036-2a30-4350-a880-07990c100da6" path="/var/lib/kubelet/pods/ab5cf036-2a30-4350-a880-07990c100da6/volumes" Feb 16 21:34:48 crc kubenswrapper[4811]: I0216 21:34:48.706973 4811 scope.go:117] "RemoveContainer" containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" Feb 16 21:34:48 crc kubenswrapper[4811]: E0216 21:34:48.708828 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:34:58 crc kubenswrapper[4811]: E0216 21:34:58.705457 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:35:01 crc kubenswrapper[4811]: I0216 21:35:01.703296 4811 scope.go:117] "RemoveContainer" containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" Feb 16 21:35:01 crc kubenswrapper[4811]: E0216 21:35:01.703877 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:35:11 crc kubenswrapper[4811]: E0216 21:35:11.708621 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:35:16 crc kubenswrapper[4811]: I0216 21:35:16.704282 4811 scope.go:117] "RemoveContainer" 
containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" Feb 16 21:35:16 crc kubenswrapper[4811]: E0216 21:35:16.705686 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:35:22 crc kubenswrapper[4811]: E0216 21:35:22.713894 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:35:27 crc kubenswrapper[4811]: I0216 21:35:27.703687 4811 scope.go:117] "RemoveContainer" containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" Feb 16 21:35:27 crc kubenswrapper[4811]: E0216 21:35:27.705052 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:35:37 crc kubenswrapper[4811]: E0216 21:35:37.706939 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" 
pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:35:38 crc kubenswrapper[4811]: I0216 21:35:38.702868 4811 scope.go:117] "RemoveContainer" containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" Feb 16 21:35:38 crc kubenswrapper[4811]: E0216 21:35:38.703537 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:35:49 crc kubenswrapper[4811]: I0216 21:35:49.703268 4811 scope.go:117] "RemoveContainer" containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" Feb 16 21:35:49 crc kubenswrapper[4811]: E0216 21:35:49.704673 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:35:50 crc kubenswrapper[4811]: E0216 21:35:50.706182 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:36:02 crc kubenswrapper[4811]: E0216 21:36:02.717570 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:36:03 crc kubenswrapper[4811]: I0216 21:36:03.703541 4811 scope.go:117] "RemoveContainer" containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" Feb 16 21:36:03 crc kubenswrapper[4811]: E0216 21:36:03.704172 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:36:15 crc kubenswrapper[4811]: E0216 21:36:15.704567 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:36:17 crc kubenswrapper[4811]: I0216 21:36:17.703307 4811 scope.go:117] "RemoveContainer" containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" Feb 16 21:36:17 crc kubenswrapper[4811]: E0216 21:36:17.704293 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 
16 21:36:28 crc kubenswrapper[4811]: E0216 21:36:28.706128 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:36:32 crc kubenswrapper[4811]: I0216 21:36:32.714690 4811 scope.go:117] "RemoveContainer" containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" Feb 16 21:36:32 crc kubenswrapper[4811]: E0216 21:36:32.715640 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:36:39 crc kubenswrapper[4811]: E0216 21:36:39.706043 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:36:45 crc kubenswrapper[4811]: I0216 21:36:45.703261 4811 scope.go:117] "RemoveContainer" containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" Feb 16 21:36:45 crc kubenswrapper[4811]: E0216 21:36:45.704247 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:36:50 crc kubenswrapper[4811]: E0216 21:36:50.705659 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:37:00 crc kubenswrapper[4811]: I0216 21:37:00.703989 4811 scope.go:117] "RemoveContainer" containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" Feb 16 21:37:00 crc kubenswrapper[4811]: E0216 21:37:00.705159 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:37:03 crc kubenswrapper[4811]: E0216 21:37:03.706456 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:37:11 crc kubenswrapper[4811]: I0216 21:37:11.703671 4811 scope.go:117] "RemoveContainer" containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" Feb 16 21:37:11 crc kubenswrapper[4811]: E0216 21:37:11.704734 4811 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:37:14 crc kubenswrapper[4811]: E0216 21:37:14.712817 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:37:20 crc kubenswrapper[4811]: I0216 21:37:20.541153 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tdzd8"] Feb 16 21:37:20 crc kubenswrapper[4811]: E0216 21:37:20.542680 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5cf036-2a30-4350-a880-07990c100da6" containerName="extract-utilities" Feb 16 21:37:20 crc kubenswrapper[4811]: I0216 21:37:20.542709 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5cf036-2a30-4350-a880-07990c100da6" containerName="extract-utilities" Feb 16 21:37:20 crc kubenswrapper[4811]: E0216 21:37:20.542761 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5cf036-2a30-4350-a880-07990c100da6" containerName="extract-content" Feb 16 21:37:20 crc kubenswrapper[4811]: I0216 21:37:20.542774 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5cf036-2a30-4350-a880-07990c100da6" containerName="extract-content" Feb 16 21:37:20 crc kubenswrapper[4811]: E0216 21:37:20.542798 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5cf036-2a30-4350-a880-07990c100da6" containerName="registry-server" Feb 16 21:37:20 crc kubenswrapper[4811]: I0216 
21:37:20.542814 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5cf036-2a30-4350-a880-07990c100da6" containerName="registry-server" Feb 16 21:37:20 crc kubenswrapper[4811]: I0216 21:37:20.543369 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab5cf036-2a30-4350-a880-07990c100da6" containerName="registry-server" Feb 16 21:37:20 crc kubenswrapper[4811]: I0216 21:37:20.546594 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tdzd8" Feb 16 21:37:20 crc kubenswrapper[4811]: I0216 21:37:20.562399 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tdzd8"] Feb 16 21:37:20 crc kubenswrapper[4811]: I0216 21:37:20.569148 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0b51a98-436b-4715-aa58-192fe6aa9309-utilities\") pod \"certified-operators-tdzd8\" (UID: \"b0b51a98-436b-4715-aa58-192fe6aa9309\") " pod="openshift-marketplace/certified-operators-tdzd8" Feb 16 21:37:20 crc kubenswrapper[4811]: I0216 21:37:20.569915 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0b51a98-436b-4715-aa58-192fe6aa9309-catalog-content\") pod \"certified-operators-tdzd8\" (UID: \"b0b51a98-436b-4715-aa58-192fe6aa9309\") " pod="openshift-marketplace/certified-operators-tdzd8" Feb 16 21:37:20 crc kubenswrapper[4811]: I0216 21:37:20.572116 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zgff\" (UniqueName: \"kubernetes.io/projected/b0b51a98-436b-4715-aa58-192fe6aa9309-kube-api-access-5zgff\") pod \"certified-operators-tdzd8\" (UID: \"b0b51a98-436b-4715-aa58-192fe6aa9309\") " pod="openshift-marketplace/certified-operators-tdzd8" Feb 16 21:37:20 crc 
kubenswrapper[4811]: I0216 21:37:20.674273 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0b51a98-436b-4715-aa58-192fe6aa9309-utilities\") pod \"certified-operators-tdzd8\" (UID: \"b0b51a98-436b-4715-aa58-192fe6aa9309\") " pod="openshift-marketplace/certified-operators-tdzd8" Feb 16 21:37:20 crc kubenswrapper[4811]: I0216 21:37:20.674325 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0b51a98-436b-4715-aa58-192fe6aa9309-catalog-content\") pod \"certified-operators-tdzd8\" (UID: \"b0b51a98-436b-4715-aa58-192fe6aa9309\") " pod="openshift-marketplace/certified-operators-tdzd8" Feb 16 21:37:20 crc kubenswrapper[4811]: I0216 21:37:20.674445 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zgff\" (UniqueName: \"kubernetes.io/projected/b0b51a98-436b-4715-aa58-192fe6aa9309-kube-api-access-5zgff\") pod \"certified-operators-tdzd8\" (UID: \"b0b51a98-436b-4715-aa58-192fe6aa9309\") " pod="openshift-marketplace/certified-operators-tdzd8" Feb 16 21:37:20 crc kubenswrapper[4811]: I0216 21:37:20.674852 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0b51a98-436b-4715-aa58-192fe6aa9309-utilities\") pod \"certified-operators-tdzd8\" (UID: \"b0b51a98-436b-4715-aa58-192fe6aa9309\") " pod="openshift-marketplace/certified-operators-tdzd8" Feb 16 21:37:20 crc kubenswrapper[4811]: I0216 21:37:20.674936 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0b51a98-436b-4715-aa58-192fe6aa9309-catalog-content\") pod \"certified-operators-tdzd8\" (UID: \"b0b51a98-436b-4715-aa58-192fe6aa9309\") " pod="openshift-marketplace/certified-operators-tdzd8" Feb 16 21:37:20 crc kubenswrapper[4811]: I0216 21:37:20.703398 
4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zgff\" (UniqueName: \"kubernetes.io/projected/b0b51a98-436b-4715-aa58-192fe6aa9309-kube-api-access-5zgff\") pod \"certified-operators-tdzd8\" (UID: \"b0b51a98-436b-4715-aa58-192fe6aa9309\") " pod="openshift-marketplace/certified-operators-tdzd8" Feb 16 21:37:20 crc kubenswrapper[4811]: I0216 21:37:20.882602 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tdzd8" Feb 16 21:37:21 crc kubenswrapper[4811]: I0216 21:37:21.434149 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tdzd8"] Feb 16 21:37:21 crc kubenswrapper[4811]: I0216 21:37:21.983912 4811 generic.go:334] "Generic (PLEG): container finished" podID="b0b51a98-436b-4715-aa58-192fe6aa9309" containerID="b26996cf1d5246641490b29b817c57a534aa61f268f321b84632b9c4fed24b90" exitCode=0 Feb 16 21:37:21 crc kubenswrapper[4811]: I0216 21:37:21.983963 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdzd8" event={"ID":"b0b51a98-436b-4715-aa58-192fe6aa9309","Type":"ContainerDied","Data":"b26996cf1d5246641490b29b817c57a534aa61f268f321b84632b9c4fed24b90"} Feb 16 21:37:21 crc kubenswrapper[4811]: I0216 21:37:21.985361 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdzd8" event={"ID":"b0b51a98-436b-4715-aa58-192fe6aa9309","Type":"ContainerStarted","Data":"25d37dd4b753dbfc4c57f6b51db8a55d9716251c49596e40ebb43060e98d6ba2"} Feb 16 21:37:24 crc kubenswrapper[4811]: I0216 21:37:24.008776 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdzd8" event={"ID":"b0b51a98-436b-4715-aa58-192fe6aa9309","Type":"ContainerStarted","Data":"cc2e2f687d4b0481827222058507786df21b7a27c737cb4fa332999b81c9b007"} Feb 16 21:37:24 crc kubenswrapper[4811]: I0216 21:37:24.702773 4811 
scope.go:117] "RemoveContainer" containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" Feb 16 21:37:24 crc kubenswrapper[4811]: E0216 21:37:24.703364 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:37:25 crc kubenswrapper[4811]: I0216 21:37:25.022592 4811 generic.go:334] "Generic (PLEG): container finished" podID="b0b51a98-436b-4715-aa58-192fe6aa9309" containerID="cc2e2f687d4b0481827222058507786df21b7a27c737cb4fa332999b81c9b007" exitCode=0 Feb 16 21:37:25 crc kubenswrapper[4811]: I0216 21:37:25.022727 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdzd8" event={"ID":"b0b51a98-436b-4715-aa58-192fe6aa9309","Type":"ContainerDied","Data":"cc2e2f687d4b0481827222058507786df21b7a27c737cb4fa332999b81c9b007"} Feb 16 21:37:26 crc kubenswrapper[4811]: I0216 21:37:26.039518 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdzd8" event={"ID":"b0b51a98-436b-4715-aa58-192fe6aa9309","Type":"ContainerStarted","Data":"ecbf2d0f5ff69717ca371018940ac4848a7a8c502eab712507c0b6ff09e36e57"} Feb 16 21:37:26 crc kubenswrapper[4811]: I0216 21:37:26.077943 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tdzd8" podStartSLOduration=2.591407802 podStartE2EDuration="6.077911943s" podCreationTimestamp="2026-02-16 21:37:20 +0000 UTC" firstStartedPulling="2026-02-16 21:37:21.986121304 +0000 UTC m=+2459.915417242" lastFinishedPulling="2026-02-16 21:37:25.472625445 +0000 UTC m=+2463.401921383" 
observedRunningTime="2026-02-16 21:37:26.062455694 +0000 UTC m=+2463.991751652" watchObservedRunningTime="2026-02-16 21:37:26.077911943 +0000 UTC m=+2464.007207921" Feb 16 21:37:26 crc kubenswrapper[4811]: E0216 21:37:26.705882 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:37:30 crc kubenswrapper[4811]: I0216 21:37:30.883391 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tdzd8" Feb 16 21:37:30 crc kubenswrapper[4811]: I0216 21:37:30.883922 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tdzd8" Feb 16 21:37:30 crc kubenswrapper[4811]: I0216 21:37:30.974444 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tdzd8" Feb 16 21:37:31 crc kubenswrapper[4811]: I0216 21:37:31.163156 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tdzd8" Feb 16 21:37:31 crc kubenswrapper[4811]: I0216 21:37:31.225573 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tdzd8"] Feb 16 21:37:33 crc kubenswrapper[4811]: I0216 21:37:33.118709 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tdzd8" podUID="b0b51a98-436b-4715-aa58-192fe6aa9309" containerName="registry-server" containerID="cri-o://ecbf2d0f5ff69717ca371018940ac4848a7a8c502eab712507c0b6ff09e36e57" gracePeriod=2 Feb 16 21:37:33 crc kubenswrapper[4811]: I0216 21:37:33.609480 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tdzd8" Feb 16 21:37:33 crc kubenswrapper[4811]: I0216 21:37:33.793157 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0b51a98-436b-4715-aa58-192fe6aa9309-utilities\") pod \"b0b51a98-436b-4715-aa58-192fe6aa9309\" (UID: \"b0b51a98-436b-4715-aa58-192fe6aa9309\") " Feb 16 21:37:33 crc kubenswrapper[4811]: I0216 21:37:33.793504 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zgff\" (UniqueName: \"kubernetes.io/projected/b0b51a98-436b-4715-aa58-192fe6aa9309-kube-api-access-5zgff\") pod \"b0b51a98-436b-4715-aa58-192fe6aa9309\" (UID: \"b0b51a98-436b-4715-aa58-192fe6aa9309\") " Feb 16 21:37:33 crc kubenswrapper[4811]: I0216 21:37:33.793657 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0b51a98-436b-4715-aa58-192fe6aa9309-catalog-content\") pod \"b0b51a98-436b-4715-aa58-192fe6aa9309\" (UID: \"b0b51a98-436b-4715-aa58-192fe6aa9309\") " Feb 16 21:37:33 crc kubenswrapper[4811]: I0216 21:37:33.795884 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0b51a98-436b-4715-aa58-192fe6aa9309-utilities" (OuterVolumeSpecName: "utilities") pod "b0b51a98-436b-4715-aa58-192fe6aa9309" (UID: "b0b51a98-436b-4715-aa58-192fe6aa9309"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:37:33 crc kubenswrapper[4811]: I0216 21:37:33.802448 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0b51a98-436b-4715-aa58-192fe6aa9309-kube-api-access-5zgff" (OuterVolumeSpecName: "kube-api-access-5zgff") pod "b0b51a98-436b-4715-aa58-192fe6aa9309" (UID: "b0b51a98-436b-4715-aa58-192fe6aa9309"). InnerVolumeSpecName "kube-api-access-5zgff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:37:33 crc kubenswrapper[4811]: I0216 21:37:33.853992 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0b51a98-436b-4715-aa58-192fe6aa9309-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0b51a98-436b-4715-aa58-192fe6aa9309" (UID: "b0b51a98-436b-4715-aa58-192fe6aa9309"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:37:33 crc kubenswrapper[4811]: I0216 21:37:33.903720 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0b51a98-436b-4715-aa58-192fe6aa9309-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:37:33 crc kubenswrapper[4811]: I0216 21:37:33.903757 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0b51a98-436b-4715-aa58-192fe6aa9309-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:37:33 crc kubenswrapper[4811]: I0216 21:37:33.903767 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zgff\" (UniqueName: \"kubernetes.io/projected/b0b51a98-436b-4715-aa58-192fe6aa9309-kube-api-access-5zgff\") on node \"crc\" DevicePath \"\"" Feb 16 21:37:34 crc kubenswrapper[4811]: I0216 21:37:34.134046 4811 generic.go:334] "Generic (PLEG): container finished" podID="b0b51a98-436b-4715-aa58-192fe6aa9309" containerID="ecbf2d0f5ff69717ca371018940ac4848a7a8c502eab712507c0b6ff09e36e57" exitCode=0 Feb 16 21:37:34 crc kubenswrapper[4811]: I0216 21:37:34.134097 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tdzd8" Feb 16 21:37:34 crc kubenswrapper[4811]: I0216 21:37:34.134138 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdzd8" event={"ID":"b0b51a98-436b-4715-aa58-192fe6aa9309","Type":"ContainerDied","Data":"ecbf2d0f5ff69717ca371018940ac4848a7a8c502eab712507c0b6ff09e36e57"} Feb 16 21:37:34 crc kubenswrapper[4811]: I0216 21:37:34.134225 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdzd8" event={"ID":"b0b51a98-436b-4715-aa58-192fe6aa9309","Type":"ContainerDied","Data":"25d37dd4b753dbfc4c57f6b51db8a55d9716251c49596e40ebb43060e98d6ba2"} Feb 16 21:37:34 crc kubenswrapper[4811]: I0216 21:37:34.134256 4811 scope.go:117] "RemoveContainer" containerID="ecbf2d0f5ff69717ca371018940ac4848a7a8c502eab712507c0b6ff09e36e57" Feb 16 21:37:34 crc kubenswrapper[4811]: I0216 21:37:34.171032 4811 scope.go:117] "RemoveContainer" containerID="cc2e2f687d4b0481827222058507786df21b7a27c737cb4fa332999b81c9b007" Feb 16 21:37:34 crc kubenswrapper[4811]: I0216 21:37:34.202422 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tdzd8"] Feb 16 21:37:34 crc kubenswrapper[4811]: I0216 21:37:34.214601 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tdzd8"] Feb 16 21:37:34 crc kubenswrapper[4811]: I0216 21:37:34.227557 4811 scope.go:117] "RemoveContainer" containerID="b26996cf1d5246641490b29b817c57a534aa61f268f321b84632b9c4fed24b90" Feb 16 21:37:34 crc kubenswrapper[4811]: I0216 21:37:34.281163 4811 scope.go:117] "RemoveContainer" containerID="ecbf2d0f5ff69717ca371018940ac4848a7a8c502eab712507c0b6ff09e36e57" Feb 16 21:37:34 crc kubenswrapper[4811]: E0216 21:37:34.281582 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ecbf2d0f5ff69717ca371018940ac4848a7a8c502eab712507c0b6ff09e36e57\": container with ID starting with ecbf2d0f5ff69717ca371018940ac4848a7a8c502eab712507c0b6ff09e36e57 not found: ID does not exist" containerID="ecbf2d0f5ff69717ca371018940ac4848a7a8c502eab712507c0b6ff09e36e57" Feb 16 21:37:34 crc kubenswrapper[4811]: I0216 21:37:34.281634 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecbf2d0f5ff69717ca371018940ac4848a7a8c502eab712507c0b6ff09e36e57"} err="failed to get container status \"ecbf2d0f5ff69717ca371018940ac4848a7a8c502eab712507c0b6ff09e36e57\": rpc error: code = NotFound desc = could not find container \"ecbf2d0f5ff69717ca371018940ac4848a7a8c502eab712507c0b6ff09e36e57\": container with ID starting with ecbf2d0f5ff69717ca371018940ac4848a7a8c502eab712507c0b6ff09e36e57 not found: ID does not exist" Feb 16 21:37:34 crc kubenswrapper[4811]: I0216 21:37:34.281675 4811 scope.go:117] "RemoveContainer" containerID="cc2e2f687d4b0481827222058507786df21b7a27c737cb4fa332999b81c9b007" Feb 16 21:37:34 crc kubenswrapper[4811]: E0216 21:37:34.282035 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc2e2f687d4b0481827222058507786df21b7a27c737cb4fa332999b81c9b007\": container with ID starting with cc2e2f687d4b0481827222058507786df21b7a27c737cb4fa332999b81c9b007 not found: ID does not exist" containerID="cc2e2f687d4b0481827222058507786df21b7a27c737cb4fa332999b81c9b007" Feb 16 21:37:34 crc kubenswrapper[4811]: I0216 21:37:34.282074 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc2e2f687d4b0481827222058507786df21b7a27c737cb4fa332999b81c9b007"} err="failed to get container status \"cc2e2f687d4b0481827222058507786df21b7a27c737cb4fa332999b81c9b007\": rpc error: code = NotFound desc = could not find container \"cc2e2f687d4b0481827222058507786df21b7a27c737cb4fa332999b81c9b007\": container with ID 
starting with cc2e2f687d4b0481827222058507786df21b7a27c737cb4fa332999b81c9b007 not found: ID does not exist" Feb 16 21:37:34 crc kubenswrapper[4811]: I0216 21:37:34.282105 4811 scope.go:117] "RemoveContainer" containerID="b26996cf1d5246641490b29b817c57a534aa61f268f321b84632b9c4fed24b90" Feb 16 21:37:34 crc kubenswrapper[4811]: E0216 21:37:34.282441 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b26996cf1d5246641490b29b817c57a534aa61f268f321b84632b9c4fed24b90\": container with ID starting with b26996cf1d5246641490b29b817c57a534aa61f268f321b84632b9c4fed24b90 not found: ID does not exist" containerID="b26996cf1d5246641490b29b817c57a534aa61f268f321b84632b9c4fed24b90" Feb 16 21:37:34 crc kubenswrapper[4811]: I0216 21:37:34.282475 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b26996cf1d5246641490b29b817c57a534aa61f268f321b84632b9c4fed24b90"} err="failed to get container status \"b26996cf1d5246641490b29b817c57a534aa61f268f321b84632b9c4fed24b90\": rpc error: code = NotFound desc = could not find container \"b26996cf1d5246641490b29b817c57a534aa61f268f321b84632b9c4fed24b90\": container with ID starting with b26996cf1d5246641490b29b817c57a534aa61f268f321b84632b9c4fed24b90 not found: ID does not exist" Feb 16 21:37:34 crc kubenswrapper[4811]: I0216 21:37:34.714669 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0b51a98-436b-4715-aa58-192fe6aa9309" path="/var/lib/kubelet/pods/b0b51a98-436b-4715-aa58-192fe6aa9309/volumes" Feb 16 21:37:38 crc kubenswrapper[4811]: I0216 21:37:38.704764 4811 scope.go:117] "RemoveContainer" containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" Feb 16 21:37:38 crc kubenswrapper[4811]: E0216 21:37:38.708261 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:37:39 crc kubenswrapper[4811]: E0216 21:37:39.705442 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:37:51 crc kubenswrapper[4811]: E0216 21:37:51.705093 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:37:52 crc kubenswrapper[4811]: I0216 21:37:52.721688 4811 scope.go:117] "RemoveContainer" containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" Feb 16 21:37:52 crc kubenswrapper[4811]: E0216 21:37:52.722751 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:38:02 crc kubenswrapper[4811]: E0216 21:38:02.719115 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:38:05 crc kubenswrapper[4811]: I0216 21:38:05.703147 4811 scope.go:117] "RemoveContainer" containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" Feb 16 21:38:05 crc kubenswrapper[4811]: E0216 21:38:05.704446 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:38:16 crc kubenswrapper[4811]: I0216 21:38:16.702970 4811 scope.go:117] "RemoveContainer" containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" Feb 16 21:38:16 crc kubenswrapper[4811]: E0216 21:38:16.704361 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:38:17 crc kubenswrapper[4811]: E0216 21:38:17.709648 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:38:28 crc kubenswrapper[4811]: I0216 21:38:28.703517 4811 
scope.go:117] "RemoveContainer" containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" Feb 16 21:38:29 crc kubenswrapper[4811]: I0216 21:38:29.742579 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerStarted","Data":"8826db22478a9264a5b7ed8387e5eca0a5e6596581cb174b0034beb59f99f9d4"} Feb 16 21:38:31 crc kubenswrapper[4811]: E0216 21:38:31.705933 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:38:46 crc kubenswrapper[4811]: E0216 21:38:46.710869 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:38:57 crc kubenswrapper[4811]: E0216 21:38:57.705907 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:39:08 crc kubenswrapper[4811]: E0216 21:39:08.707182 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" 
pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:39:22 crc kubenswrapper[4811]: E0216 21:39:22.713612 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:39:37 crc kubenswrapper[4811]: I0216 21:39:37.708747 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:39:37 crc kubenswrapper[4811]: E0216 21:39:37.801036 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 21:39:37 crc kubenswrapper[4811]: E0216 21:39:37.801126 4811 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 21:39:37 crc kubenswrapper[4811]: E0216 21:39:37.801344 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s56zx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-x49kk_openstack(46d0afcb-2a14-4e67-89fc-ed848d1637ce): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:39:37 crc kubenswrapper[4811]: E0216 21:39:37.802585 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:39:51 crc kubenswrapper[4811]: E0216 21:39:51.704300 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:40:03 crc kubenswrapper[4811]: E0216 21:40:03.705526 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:40:15 crc kubenswrapper[4811]: E0216 21:40:15.705170 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:40:26 crc kubenswrapper[4811]: E0216 21:40:26.708187 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:40:40 crc kubenswrapper[4811]: E0216 21:40:40.705593 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:40:48 crc kubenswrapper[4811]: I0216 21:40:48.364263 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:40:48 crc kubenswrapper[4811]: I0216 21:40:48.365049 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:40:54 crc kubenswrapper[4811]: E0216 21:40:54.706158 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:41:05 crc kubenswrapper[4811]: E0216 21:41:05.705531 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:41:18 crc kubenswrapper[4811]: I0216 21:41:18.364156 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:41:18 crc kubenswrapper[4811]: I0216 21:41:18.364725 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:41:20 crc kubenswrapper[4811]: E0216 21:41:20.707491 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:41:31 crc kubenswrapper[4811]: E0216 21:41:31.705732 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:41:45 crc kubenswrapper[4811]: E0216 21:41:45.706469 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:41:48 crc kubenswrapper[4811]: I0216 21:41:48.363619 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:41:48 crc kubenswrapper[4811]: I0216 21:41:48.363959 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:41:48 crc kubenswrapper[4811]: I0216 21:41:48.364014 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" Feb 16 21:41:48 crc kubenswrapper[4811]: I0216 21:41:48.364991 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8826db22478a9264a5b7ed8387e5eca0a5e6596581cb174b0034beb59f99f9d4"} pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:41:48 crc kubenswrapper[4811]: I0216 21:41:48.365068 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" containerID="cri-o://8826db22478a9264a5b7ed8387e5eca0a5e6596581cb174b0034beb59f99f9d4" gracePeriod=600 Feb 16 21:41:49 crc kubenswrapper[4811]: I0216 21:41:49.313638 4811 generic.go:334] "Generic (PLEG): container finished" podID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerID="8826db22478a9264a5b7ed8387e5eca0a5e6596581cb174b0034beb59f99f9d4" exitCode=0 Feb 16 21:41:49 crc kubenswrapper[4811]: I0216 21:41:49.313706 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" 
event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerDied","Data":"8826db22478a9264a5b7ed8387e5eca0a5e6596581cb174b0034beb59f99f9d4"} Feb 16 21:41:49 crc kubenswrapper[4811]: I0216 21:41:49.314123 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerStarted","Data":"0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01"} Feb 16 21:41:49 crc kubenswrapper[4811]: I0216 21:41:49.314162 4811 scope.go:117] "RemoveContainer" containerID="34a3044faaa8f2f048b7b97e4a34ddefa619f14952d8a91c391eea60f92a330d" Feb 16 21:41:58 crc kubenswrapper[4811]: E0216 21:41:58.707365 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:42:09 crc kubenswrapper[4811]: E0216 21:42:09.705770 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:42:21 crc kubenswrapper[4811]: I0216 21:42:21.923238 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5fhsw"] Feb 16 21:42:21 crc kubenswrapper[4811]: E0216 21:42:21.924353 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0b51a98-436b-4715-aa58-192fe6aa9309" containerName="extract-content" Feb 16 21:42:21 crc kubenswrapper[4811]: I0216 21:42:21.924370 4811 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b0b51a98-436b-4715-aa58-192fe6aa9309" containerName="extract-content" Feb 16 21:42:21 crc kubenswrapper[4811]: E0216 21:42:21.924392 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0b51a98-436b-4715-aa58-192fe6aa9309" containerName="extract-utilities" Feb 16 21:42:21 crc kubenswrapper[4811]: I0216 21:42:21.924400 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0b51a98-436b-4715-aa58-192fe6aa9309" containerName="extract-utilities" Feb 16 21:42:21 crc kubenswrapper[4811]: E0216 21:42:21.924440 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0b51a98-436b-4715-aa58-192fe6aa9309" containerName="registry-server" Feb 16 21:42:21 crc kubenswrapper[4811]: I0216 21:42:21.924448 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0b51a98-436b-4715-aa58-192fe6aa9309" containerName="registry-server" Feb 16 21:42:21 crc kubenswrapper[4811]: I0216 21:42:21.924656 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0b51a98-436b-4715-aa58-192fe6aa9309" containerName="registry-server" Feb 16 21:42:21 crc kubenswrapper[4811]: I0216 21:42:21.926515 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5fhsw" Feb 16 21:42:21 crc kubenswrapper[4811]: I0216 21:42:21.943420 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5fhsw"] Feb 16 21:42:22 crc kubenswrapper[4811]: I0216 21:42:22.028136 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh54w\" (UniqueName: \"kubernetes.io/projected/67e71532-906c-4b9e-bc64-07ce7135572c-kube-api-access-gh54w\") pod \"redhat-marketplace-5fhsw\" (UID: \"67e71532-906c-4b9e-bc64-07ce7135572c\") " pod="openshift-marketplace/redhat-marketplace-5fhsw" Feb 16 21:42:22 crc kubenswrapper[4811]: I0216 21:42:22.028575 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e71532-906c-4b9e-bc64-07ce7135572c-utilities\") pod \"redhat-marketplace-5fhsw\" (UID: \"67e71532-906c-4b9e-bc64-07ce7135572c\") " pod="openshift-marketplace/redhat-marketplace-5fhsw" Feb 16 21:42:22 crc kubenswrapper[4811]: I0216 21:42:22.028806 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e71532-906c-4b9e-bc64-07ce7135572c-catalog-content\") pod \"redhat-marketplace-5fhsw\" (UID: \"67e71532-906c-4b9e-bc64-07ce7135572c\") " pod="openshift-marketplace/redhat-marketplace-5fhsw" Feb 16 21:42:22 crc kubenswrapper[4811]: I0216 21:42:22.130653 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh54w\" (UniqueName: \"kubernetes.io/projected/67e71532-906c-4b9e-bc64-07ce7135572c-kube-api-access-gh54w\") pod \"redhat-marketplace-5fhsw\" (UID: \"67e71532-906c-4b9e-bc64-07ce7135572c\") " pod="openshift-marketplace/redhat-marketplace-5fhsw" Feb 16 21:42:22 crc kubenswrapper[4811]: I0216 21:42:22.130761 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e71532-906c-4b9e-bc64-07ce7135572c-utilities\") pod \"redhat-marketplace-5fhsw\" (UID: \"67e71532-906c-4b9e-bc64-07ce7135572c\") " pod="openshift-marketplace/redhat-marketplace-5fhsw" Feb 16 21:42:22 crc kubenswrapper[4811]: I0216 21:42:22.130907 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e71532-906c-4b9e-bc64-07ce7135572c-catalog-content\") pod \"redhat-marketplace-5fhsw\" (UID: \"67e71532-906c-4b9e-bc64-07ce7135572c\") " pod="openshift-marketplace/redhat-marketplace-5fhsw" Feb 16 21:42:22 crc kubenswrapper[4811]: I0216 21:42:22.131274 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e71532-906c-4b9e-bc64-07ce7135572c-utilities\") pod \"redhat-marketplace-5fhsw\" (UID: \"67e71532-906c-4b9e-bc64-07ce7135572c\") " pod="openshift-marketplace/redhat-marketplace-5fhsw" Feb 16 21:42:22 crc kubenswrapper[4811]: I0216 21:42:22.131388 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e71532-906c-4b9e-bc64-07ce7135572c-catalog-content\") pod \"redhat-marketplace-5fhsw\" (UID: \"67e71532-906c-4b9e-bc64-07ce7135572c\") " pod="openshift-marketplace/redhat-marketplace-5fhsw" Feb 16 21:42:22 crc kubenswrapper[4811]: I0216 21:42:22.165112 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh54w\" (UniqueName: \"kubernetes.io/projected/67e71532-906c-4b9e-bc64-07ce7135572c-kube-api-access-gh54w\") pod \"redhat-marketplace-5fhsw\" (UID: \"67e71532-906c-4b9e-bc64-07ce7135572c\") " pod="openshift-marketplace/redhat-marketplace-5fhsw" Feb 16 21:42:22 crc kubenswrapper[4811]: I0216 21:42:22.297454 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5fhsw" Feb 16 21:42:22 crc kubenswrapper[4811]: I0216 21:42:22.839494 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5fhsw"] Feb 16 21:42:23 crc kubenswrapper[4811]: I0216 21:42:23.673400 4811 generic.go:334] "Generic (PLEG): container finished" podID="67e71532-906c-4b9e-bc64-07ce7135572c" containerID="e475c6b7159c5c8f41b244cbeffcdba9a93be2846d04a543208b8bbdc52772b2" exitCode=0 Feb 16 21:42:23 crc kubenswrapper[4811]: I0216 21:42:23.673495 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5fhsw" event={"ID":"67e71532-906c-4b9e-bc64-07ce7135572c","Type":"ContainerDied","Data":"e475c6b7159c5c8f41b244cbeffcdba9a93be2846d04a543208b8bbdc52772b2"} Feb 16 21:42:23 crc kubenswrapper[4811]: I0216 21:42:23.673804 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5fhsw" event={"ID":"67e71532-906c-4b9e-bc64-07ce7135572c","Type":"ContainerStarted","Data":"1dec53f5ed1120922de7b4787329e6f32b06c1f09ba0559e3168e2f0db634f95"} Feb 16 21:42:23 crc kubenswrapper[4811]: E0216 21:42:23.708485 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:42:24 crc kubenswrapper[4811]: I0216 21:42:24.696096 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5fhsw" event={"ID":"67e71532-906c-4b9e-bc64-07ce7135572c","Type":"ContainerStarted","Data":"13693325ccd4ac8b0cf6cf19af7a136d8cbf6ae3168426bd182501a2064c066c"} Feb 16 21:42:25 crc kubenswrapper[4811]: I0216 21:42:25.713973 4811 generic.go:334] "Generic (PLEG): container finished" 
podID="67e71532-906c-4b9e-bc64-07ce7135572c" containerID="13693325ccd4ac8b0cf6cf19af7a136d8cbf6ae3168426bd182501a2064c066c" exitCode=0 Feb 16 21:42:25 crc kubenswrapper[4811]: I0216 21:42:25.714088 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5fhsw" event={"ID":"67e71532-906c-4b9e-bc64-07ce7135572c","Type":"ContainerDied","Data":"13693325ccd4ac8b0cf6cf19af7a136d8cbf6ae3168426bd182501a2064c066c"} Feb 16 21:42:26 crc kubenswrapper[4811]: I0216 21:42:26.741710 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5fhsw" event={"ID":"67e71532-906c-4b9e-bc64-07ce7135572c","Type":"ContainerStarted","Data":"eb86f93b20b29d63414ccc9e3de9af76edf4830c02cf8b49c6e20c51f78ae60d"} Feb 16 21:42:26 crc kubenswrapper[4811]: I0216 21:42:26.764954 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5fhsw" podStartSLOduration=3.292175998 podStartE2EDuration="5.764936974s" podCreationTimestamp="2026-02-16 21:42:21 +0000 UTC" firstStartedPulling="2026-02-16 21:42:23.676502135 +0000 UTC m=+2761.605798113" lastFinishedPulling="2026-02-16 21:42:26.149263161 +0000 UTC m=+2764.078559089" observedRunningTime="2026-02-16 21:42:26.764053192 +0000 UTC m=+2764.693349140" watchObservedRunningTime="2026-02-16 21:42:26.764936974 +0000 UTC m=+2764.694232912" Feb 16 21:42:32 crc kubenswrapper[4811]: I0216 21:42:32.298107 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5fhsw" Feb 16 21:42:32 crc kubenswrapper[4811]: I0216 21:42:32.298780 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5fhsw" Feb 16 21:42:32 crc kubenswrapper[4811]: I0216 21:42:32.396346 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5fhsw" Feb 16 21:42:32 crc 
kubenswrapper[4811]: I0216 21:42:32.887794 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5fhsw" Feb 16 21:42:32 crc kubenswrapper[4811]: I0216 21:42:32.961711 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5fhsw"] Feb 16 21:42:34 crc kubenswrapper[4811]: I0216 21:42:34.842485 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5fhsw" podUID="67e71532-906c-4b9e-bc64-07ce7135572c" containerName="registry-server" containerID="cri-o://eb86f93b20b29d63414ccc9e3de9af76edf4830c02cf8b49c6e20c51f78ae60d" gracePeriod=2 Feb 16 21:42:35 crc kubenswrapper[4811]: I0216 21:42:35.459107 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5fhsw" Feb 16 21:42:35 crc kubenswrapper[4811]: I0216 21:42:35.551165 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gh54w\" (UniqueName: \"kubernetes.io/projected/67e71532-906c-4b9e-bc64-07ce7135572c-kube-api-access-gh54w\") pod \"67e71532-906c-4b9e-bc64-07ce7135572c\" (UID: \"67e71532-906c-4b9e-bc64-07ce7135572c\") " Feb 16 21:42:35 crc kubenswrapper[4811]: I0216 21:42:35.551340 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e71532-906c-4b9e-bc64-07ce7135572c-utilities\") pod \"67e71532-906c-4b9e-bc64-07ce7135572c\" (UID: \"67e71532-906c-4b9e-bc64-07ce7135572c\") " Feb 16 21:42:35 crc kubenswrapper[4811]: I0216 21:42:35.552376 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67e71532-906c-4b9e-bc64-07ce7135572c-utilities" (OuterVolumeSpecName: "utilities") pod "67e71532-906c-4b9e-bc64-07ce7135572c" (UID: "67e71532-906c-4b9e-bc64-07ce7135572c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:42:35 crc kubenswrapper[4811]: I0216 21:42:35.552496 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e71532-906c-4b9e-bc64-07ce7135572c-catalog-content\") pod \"67e71532-906c-4b9e-bc64-07ce7135572c\" (UID: \"67e71532-906c-4b9e-bc64-07ce7135572c\") " Feb 16 21:42:35 crc kubenswrapper[4811]: I0216 21:42:35.553966 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e71532-906c-4b9e-bc64-07ce7135572c-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:42:35 crc kubenswrapper[4811]: I0216 21:42:35.561646 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67e71532-906c-4b9e-bc64-07ce7135572c-kube-api-access-gh54w" (OuterVolumeSpecName: "kube-api-access-gh54w") pod "67e71532-906c-4b9e-bc64-07ce7135572c" (UID: "67e71532-906c-4b9e-bc64-07ce7135572c"). InnerVolumeSpecName "kube-api-access-gh54w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:42:35 crc kubenswrapper[4811]: I0216 21:42:35.577604 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67e71532-906c-4b9e-bc64-07ce7135572c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67e71532-906c-4b9e-bc64-07ce7135572c" (UID: "67e71532-906c-4b9e-bc64-07ce7135572c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:42:35 crc kubenswrapper[4811]: I0216 21:42:35.655772 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e71532-906c-4b9e-bc64-07ce7135572c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:42:35 crc kubenswrapper[4811]: I0216 21:42:35.655816 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gh54w\" (UniqueName: \"kubernetes.io/projected/67e71532-906c-4b9e-bc64-07ce7135572c-kube-api-access-gh54w\") on node \"crc\" DevicePath \"\"" Feb 16 21:42:35 crc kubenswrapper[4811]: I0216 21:42:35.857250 4811 generic.go:334] "Generic (PLEG): container finished" podID="67e71532-906c-4b9e-bc64-07ce7135572c" containerID="eb86f93b20b29d63414ccc9e3de9af76edf4830c02cf8b49c6e20c51f78ae60d" exitCode=0 Feb 16 21:42:35 crc kubenswrapper[4811]: I0216 21:42:35.857327 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5fhsw" Feb 16 21:42:35 crc kubenswrapper[4811]: I0216 21:42:35.857334 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5fhsw" event={"ID":"67e71532-906c-4b9e-bc64-07ce7135572c","Type":"ContainerDied","Data":"eb86f93b20b29d63414ccc9e3de9af76edf4830c02cf8b49c6e20c51f78ae60d"} Feb 16 21:42:35 crc kubenswrapper[4811]: I0216 21:42:35.858217 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5fhsw" event={"ID":"67e71532-906c-4b9e-bc64-07ce7135572c","Type":"ContainerDied","Data":"1dec53f5ed1120922de7b4787329e6f32b06c1f09ba0559e3168e2f0db634f95"} Feb 16 21:42:35 crc kubenswrapper[4811]: I0216 21:42:35.858240 4811 scope.go:117] "RemoveContainer" containerID="eb86f93b20b29d63414ccc9e3de9af76edf4830c02cf8b49c6e20c51f78ae60d" Feb 16 21:42:35 crc kubenswrapper[4811]: I0216 21:42:35.896292 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-5fhsw"] Feb 16 21:42:35 crc kubenswrapper[4811]: I0216 21:42:35.900302 4811 scope.go:117] "RemoveContainer" containerID="13693325ccd4ac8b0cf6cf19af7a136d8cbf6ae3168426bd182501a2064c066c" Feb 16 21:42:35 crc kubenswrapper[4811]: I0216 21:42:35.903523 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5fhsw"] Feb 16 21:42:35 crc kubenswrapper[4811]: I0216 21:42:35.925517 4811 scope.go:117] "RemoveContainer" containerID="e475c6b7159c5c8f41b244cbeffcdba9a93be2846d04a543208b8bbdc52772b2" Feb 16 21:42:35 crc kubenswrapper[4811]: I0216 21:42:35.996970 4811 scope.go:117] "RemoveContainer" containerID="eb86f93b20b29d63414ccc9e3de9af76edf4830c02cf8b49c6e20c51f78ae60d" Feb 16 21:42:35 crc kubenswrapper[4811]: E0216 21:42:35.997887 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb86f93b20b29d63414ccc9e3de9af76edf4830c02cf8b49c6e20c51f78ae60d\": container with ID starting with eb86f93b20b29d63414ccc9e3de9af76edf4830c02cf8b49c6e20c51f78ae60d not found: ID does not exist" containerID="eb86f93b20b29d63414ccc9e3de9af76edf4830c02cf8b49c6e20c51f78ae60d" Feb 16 21:42:35 crc kubenswrapper[4811]: I0216 21:42:35.997927 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb86f93b20b29d63414ccc9e3de9af76edf4830c02cf8b49c6e20c51f78ae60d"} err="failed to get container status \"eb86f93b20b29d63414ccc9e3de9af76edf4830c02cf8b49c6e20c51f78ae60d\": rpc error: code = NotFound desc = could not find container \"eb86f93b20b29d63414ccc9e3de9af76edf4830c02cf8b49c6e20c51f78ae60d\": container with ID starting with eb86f93b20b29d63414ccc9e3de9af76edf4830c02cf8b49c6e20c51f78ae60d not found: ID does not exist" Feb 16 21:42:35 crc kubenswrapper[4811]: I0216 21:42:35.997970 4811 scope.go:117] "RemoveContainer" 
containerID="13693325ccd4ac8b0cf6cf19af7a136d8cbf6ae3168426bd182501a2064c066c" Feb 16 21:42:35 crc kubenswrapper[4811]: E0216 21:42:35.998934 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13693325ccd4ac8b0cf6cf19af7a136d8cbf6ae3168426bd182501a2064c066c\": container with ID starting with 13693325ccd4ac8b0cf6cf19af7a136d8cbf6ae3168426bd182501a2064c066c not found: ID does not exist" containerID="13693325ccd4ac8b0cf6cf19af7a136d8cbf6ae3168426bd182501a2064c066c" Feb 16 21:42:35 crc kubenswrapper[4811]: I0216 21:42:35.998984 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13693325ccd4ac8b0cf6cf19af7a136d8cbf6ae3168426bd182501a2064c066c"} err="failed to get container status \"13693325ccd4ac8b0cf6cf19af7a136d8cbf6ae3168426bd182501a2064c066c\": rpc error: code = NotFound desc = could not find container \"13693325ccd4ac8b0cf6cf19af7a136d8cbf6ae3168426bd182501a2064c066c\": container with ID starting with 13693325ccd4ac8b0cf6cf19af7a136d8cbf6ae3168426bd182501a2064c066c not found: ID does not exist" Feb 16 21:42:35 crc kubenswrapper[4811]: I0216 21:42:35.999020 4811 scope.go:117] "RemoveContainer" containerID="e475c6b7159c5c8f41b244cbeffcdba9a93be2846d04a543208b8bbdc52772b2" Feb 16 21:42:35 crc kubenswrapper[4811]: E0216 21:42:35.999368 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e475c6b7159c5c8f41b244cbeffcdba9a93be2846d04a543208b8bbdc52772b2\": container with ID starting with e475c6b7159c5c8f41b244cbeffcdba9a93be2846d04a543208b8bbdc52772b2 not found: ID does not exist" containerID="e475c6b7159c5c8f41b244cbeffcdba9a93be2846d04a543208b8bbdc52772b2" Feb 16 21:42:35 crc kubenswrapper[4811]: I0216 21:42:35.999423 4811 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e475c6b7159c5c8f41b244cbeffcdba9a93be2846d04a543208b8bbdc52772b2"} err="failed to get container status \"e475c6b7159c5c8f41b244cbeffcdba9a93be2846d04a543208b8bbdc52772b2\": rpc error: code = NotFound desc = could not find container \"e475c6b7159c5c8f41b244cbeffcdba9a93be2846d04a543208b8bbdc52772b2\": container with ID starting with e475c6b7159c5c8f41b244cbeffcdba9a93be2846d04a543208b8bbdc52772b2 not found: ID does not exist" Feb 16 21:42:36 crc kubenswrapper[4811]: I0216 21:42:36.724955 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67e71532-906c-4b9e-bc64-07ce7135572c" path="/var/lib/kubelet/pods/67e71532-906c-4b9e-bc64-07ce7135572c/volumes" Feb 16 21:42:37 crc kubenswrapper[4811]: E0216 21:42:37.705965 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:42:43 crc kubenswrapper[4811]: I0216 21:42:43.057611 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-272g5/must-gather-9khjs"] Feb 16 21:42:43 crc kubenswrapper[4811]: E0216 21:42:43.058582 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e71532-906c-4b9e-bc64-07ce7135572c" containerName="extract-content" Feb 16 21:42:43 crc kubenswrapper[4811]: I0216 21:42:43.058600 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e71532-906c-4b9e-bc64-07ce7135572c" containerName="extract-content" Feb 16 21:42:43 crc kubenswrapper[4811]: E0216 21:42:43.058620 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e71532-906c-4b9e-bc64-07ce7135572c" containerName="registry-server" Feb 16 21:42:43 crc kubenswrapper[4811]: I0216 21:42:43.058631 4811 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="67e71532-906c-4b9e-bc64-07ce7135572c" containerName="registry-server" Feb 16 21:42:43 crc kubenswrapper[4811]: E0216 21:42:43.058669 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e71532-906c-4b9e-bc64-07ce7135572c" containerName="extract-utilities" Feb 16 21:42:43 crc kubenswrapper[4811]: I0216 21:42:43.058677 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e71532-906c-4b9e-bc64-07ce7135572c" containerName="extract-utilities" Feb 16 21:42:43 crc kubenswrapper[4811]: I0216 21:42:43.058942 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e71532-906c-4b9e-bc64-07ce7135572c" containerName="registry-server" Feb 16 21:42:43 crc kubenswrapper[4811]: I0216 21:42:43.060337 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-272g5/must-gather-9khjs" Feb 16 21:42:43 crc kubenswrapper[4811]: I0216 21:42:43.062884 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-272g5"/"openshift-service-ca.crt" Feb 16 21:42:43 crc kubenswrapper[4811]: I0216 21:42:43.063151 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-272g5"/"kube-root-ca.crt" Feb 16 21:42:43 crc kubenswrapper[4811]: I0216 21:42:43.088940 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-272g5/must-gather-9khjs"] Feb 16 21:42:43 crc kubenswrapper[4811]: I0216 21:42:43.174394 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbdj4\" (UniqueName: \"kubernetes.io/projected/a52b47b0-9a9a-4264-bb3f-685b8a948004-kube-api-access-vbdj4\") pod \"must-gather-9khjs\" (UID: \"a52b47b0-9a9a-4264-bb3f-685b8a948004\") " pod="openshift-must-gather-272g5/must-gather-9khjs" Feb 16 21:42:43 crc kubenswrapper[4811]: I0216 21:42:43.174799 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a52b47b0-9a9a-4264-bb3f-685b8a948004-must-gather-output\") pod \"must-gather-9khjs\" (UID: \"a52b47b0-9a9a-4264-bb3f-685b8a948004\") " pod="openshift-must-gather-272g5/must-gather-9khjs" Feb 16 21:42:43 crc kubenswrapper[4811]: I0216 21:42:43.276394 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a52b47b0-9a9a-4264-bb3f-685b8a948004-must-gather-output\") pod \"must-gather-9khjs\" (UID: \"a52b47b0-9a9a-4264-bb3f-685b8a948004\") " pod="openshift-must-gather-272g5/must-gather-9khjs" Feb 16 21:42:43 crc kubenswrapper[4811]: I0216 21:42:43.276533 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbdj4\" (UniqueName: \"kubernetes.io/projected/a52b47b0-9a9a-4264-bb3f-685b8a948004-kube-api-access-vbdj4\") pod \"must-gather-9khjs\" (UID: \"a52b47b0-9a9a-4264-bb3f-685b8a948004\") " pod="openshift-must-gather-272g5/must-gather-9khjs" Feb 16 21:42:43 crc kubenswrapper[4811]: I0216 21:42:43.277288 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a52b47b0-9a9a-4264-bb3f-685b8a948004-must-gather-output\") pod \"must-gather-9khjs\" (UID: \"a52b47b0-9a9a-4264-bb3f-685b8a948004\") " pod="openshift-must-gather-272g5/must-gather-9khjs" Feb 16 21:42:43 crc kubenswrapper[4811]: I0216 21:42:43.305383 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbdj4\" (UniqueName: \"kubernetes.io/projected/a52b47b0-9a9a-4264-bb3f-685b8a948004-kube-api-access-vbdj4\") pod \"must-gather-9khjs\" (UID: \"a52b47b0-9a9a-4264-bb3f-685b8a948004\") " pod="openshift-must-gather-272g5/must-gather-9khjs" Feb 16 21:42:43 crc kubenswrapper[4811]: I0216 21:42:43.381588 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-272g5/must-gather-9khjs" Feb 16 21:42:43 crc kubenswrapper[4811]: I0216 21:42:43.960431 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-272g5/must-gather-9khjs"] Feb 16 21:42:44 crc kubenswrapper[4811]: I0216 21:42:44.987443 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-272g5/must-gather-9khjs" event={"ID":"a52b47b0-9a9a-4264-bb3f-685b8a948004","Type":"ContainerStarted","Data":"234026dee50d20d3b90cb138164727f744201af40f1f76b7a4cd788565a777e7"} Feb 16 21:42:50 crc kubenswrapper[4811]: E0216 21:42:50.710733 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:42:52 crc kubenswrapper[4811]: I0216 21:42:52.076884 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-272g5/must-gather-9khjs" event={"ID":"a52b47b0-9a9a-4264-bb3f-685b8a948004","Type":"ContainerStarted","Data":"1c1c7c29b6b6d04a7d87527a47f7b2efae98d221057827b3cbae486097d4dac7"} Feb 16 21:42:52 crc kubenswrapper[4811]: I0216 21:42:52.077329 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-272g5/must-gather-9khjs" event={"ID":"a52b47b0-9a9a-4264-bb3f-685b8a948004","Type":"ContainerStarted","Data":"18e3b4ab6c61d74575cea6b46ca49fa57afb0fd2645400a7a004ff4d614b9f7f"} Feb 16 21:42:52 crc kubenswrapper[4811]: I0216 21:42:52.096995 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-272g5/must-gather-9khjs" podStartSLOduration=1.7042849329999998 podStartE2EDuration="9.096967745s" podCreationTimestamp="2026-02-16 21:42:43 +0000 UTC" firstStartedPulling="2026-02-16 21:42:43.968910828 +0000 UTC 
m=+2781.898206766" lastFinishedPulling="2026-02-16 21:42:51.36159364 +0000 UTC m=+2789.290889578" observedRunningTime="2026-02-16 21:42:52.088726833 +0000 UTC m=+2790.018022791" watchObservedRunningTime="2026-02-16 21:42:52.096967745 +0000 UTC m=+2790.026263723" Feb 16 21:42:55 crc kubenswrapper[4811]: I0216 21:42:55.369863 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-272g5/crc-debug-bx2kr"] Feb 16 21:42:55 crc kubenswrapper[4811]: I0216 21:42:55.372232 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-272g5/crc-debug-bx2kr" Feb 16 21:42:55 crc kubenswrapper[4811]: I0216 21:42:55.375089 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-272g5"/"default-dockercfg-wwbn8" Feb 16 21:42:55 crc kubenswrapper[4811]: I0216 21:42:55.495371 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcmf4\" (UniqueName: \"kubernetes.io/projected/c6098058-8a0a-474b-9632-b20b2495ac2f-kube-api-access-vcmf4\") pod \"crc-debug-bx2kr\" (UID: \"c6098058-8a0a-474b-9632-b20b2495ac2f\") " pod="openshift-must-gather-272g5/crc-debug-bx2kr" Feb 16 21:42:55 crc kubenswrapper[4811]: I0216 21:42:55.495664 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c6098058-8a0a-474b-9632-b20b2495ac2f-host\") pod \"crc-debug-bx2kr\" (UID: \"c6098058-8a0a-474b-9632-b20b2495ac2f\") " pod="openshift-must-gather-272g5/crc-debug-bx2kr" Feb 16 21:42:55 crc kubenswrapper[4811]: I0216 21:42:55.599603 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcmf4\" (UniqueName: \"kubernetes.io/projected/c6098058-8a0a-474b-9632-b20b2495ac2f-kube-api-access-vcmf4\") pod \"crc-debug-bx2kr\" (UID: \"c6098058-8a0a-474b-9632-b20b2495ac2f\") " pod="openshift-must-gather-272g5/crc-debug-bx2kr" Feb 16 
21:42:55 crc kubenswrapper[4811]: I0216 21:42:55.599695 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c6098058-8a0a-474b-9632-b20b2495ac2f-host\") pod \"crc-debug-bx2kr\" (UID: \"c6098058-8a0a-474b-9632-b20b2495ac2f\") " pod="openshift-must-gather-272g5/crc-debug-bx2kr" Feb 16 21:42:55 crc kubenswrapper[4811]: I0216 21:42:55.599805 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c6098058-8a0a-474b-9632-b20b2495ac2f-host\") pod \"crc-debug-bx2kr\" (UID: \"c6098058-8a0a-474b-9632-b20b2495ac2f\") " pod="openshift-must-gather-272g5/crc-debug-bx2kr" Feb 16 21:42:55 crc kubenswrapper[4811]: I0216 21:42:55.625742 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcmf4\" (UniqueName: \"kubernetes.io/projected/c6098058-8a0a-474b-9632-b20b2495ac2f-kube-api-access-vcmf4\") pod \"crc-debug-bx2kr\" (UID: \"c6098058-8a0a-474b-9632-b20b2495ac2f\") " pod="openshift-must-gather-272g5/crc-debug-bx2kr" Feb 16 21:42:55 crc kubenswrapper[4811]: I0216 21:42:55.693133 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-272g5/crc-debug-bx2kr" Feb 16 21:42:55 crc kubenswrapper[4811]: W0216 21:42:55.722449 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6098058_8a0a_474b_9632_b20b2495ac2f.slice/crio-5fbc8a7b17c56619bb4ef9f87ac62d4691bc738ef345ccde2d47ec157fa7d0fc WatchSource:0}: Error finding container 5fbc8a7b17c56619bb4ef9f87ac62d4691bc738ef345ccde2d47ec157fa7d0fc: Status 404 returned error can't find the container with id 5fbc8a7b17c56619bb4ef9f87ac62d4691bc738ef345ccde2d47ec157fa7d0fc Feb 16 21:42:56 crc kubenswrapper[4811]: I0216 21:42:56.118689 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-272g5/crc-debug-bx2kr" event={"ID":"c6098058-8a0a-474b-9632-b20b2495ac2f","Type":"ContainerStarted","Data":"5fbc8a7b17c56619bb4ef9f87ac62d4691bc738ef345ccde2d47ec157fa7d0fc"} Feb 16 21:43:01 crc kubenswrapper[4811]: E0216 21:43:01.705672 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:43:08 crc kubenswrapper[4811]: I0216 21:43:08.228902 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-272g5/crc-debug-bx2kr" event={"ID":"c6098058-8a0a-474b-9632-b20b2495ac2f","Type":"ContainerStarted","Data":"83e95539d73e458dbf02d293a64ae0f4cccbcb570ca41d6b640f7767b5422796"} Feb 16 21:43:08 crc kubenswrapper[4811]: I0216 21:43:08.245345 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-272g5/crc-debug-bx2kr" podStartSLOduration=1.58006598 podStartE2EDuration="13.245325688s" podCreationTimestamp="2026-02-16 21:42:55 +0000 UTC" firstStartedPulling="2026-02-16 
21:42:55.724601633 +0000 UTC m=+2793.653897571" lastFinishedPulling="2026-02-16 21:43:07.389861331 +0000 UTC m=+2805.319157279" observedRunningTime="2026-02-16 21:43:08.239790563 +0000 UTC m=+2806.169086521" watchObservedRunningTime="2026-02-16 21:43:08.245325688 +0000 UTC m=+2806.174621626" Feb 16 21:43:15 crc kubenswrapper[4811]: I0216 21:43:15.751987 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2grjq"] Feb 16 21:43:15 crc kubenswrapper[4811]: I0216 21:43:15.755629 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2grjq" Feb 16 21:43:15 crc kubenswrapper[4811]: I0216 21:43:15.764629 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2grjq"] Feb 16 21:43:15 crc kubenswrapper[4811]: I0216 21:43:15.953382 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bdddd22-868b-45c8-9355-cffde39e92d2-utilities\") pod \"redhat-operators-2grjq\" (UID: \"1bdddd22-868b-45c8-9355-cffde39e92d2\") " pod="openshift-marketplace/redhat-operators-2grjq" Feb 16 21:43:15 crc kubenswrapper[4811]: I0216 21:43:15.953515 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8xvm\" (UniqueName: \"kubernetes.io/projected/1bdddd22-868b-45c8-9355-cffde39e92d2-kube-api-access-v8xvm\") pod \"redhat-operators-2grjq\" (UID: \"1bdddd22-868b-45c8-9355-cffde39e92d2\") " pod="openshift-marketplace/redhat-operators-2grjq" Feb 16 21:43:15 crc kubenswrapper[4811]: I0216 21:43:15.953586 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bdddd22-868b-45c8-9355-cffde39e92d2-catalog-content\") pod \"redhat-operators-2grjq\" (UID: \"1bdddd22-868b-45c8-9355-cffde39e92d2\") " 
pod="openshift-marketplace/redhat-operators-2grjq" Feb 16 21:43:16 crc kubenswrapper[4811]: I0216 21:43:16.055510 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bdddd22-868b-45c8-9355-cffde39e92d2-utilities\") pod \"redhat-operators-2grjq\" (UID: \"1bdddd22-868b-45c8-9355-cffde39e92d2\") " pod="openshift-marketplace/redhat-operators-2grjq" Feb 16 21:43:16 crc kubenswrapper[4811]: I0216 21:43:16.055624 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8xvm\" (UniqueName: \"kubernetes.io/projected/1bdddd22-868b-45c8-9355-cffde39e92d2-kube-api-access-v8xvm\") pod \"redhat-operators-2grjq\" (UID: \"1bdddd22-868b-45c8-9355-cffde39e92d2\") " pod="openshift-marketplace/redhat-operators-2grjq" Feb 16 21:43:16 crc kubenswrapper[4811]: I0216 21:43:16.055676 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bdddd22-868b-45c8-9355-cffde39e92d2-catalog-content\") pod \"redhat-operators-2grjq\" (UID: \"1bdddd22-868b-45c8-9355-cffde39e92d2\") " pod="openshift-marketplace/redhat-operators-2grjq" Feb 16 21:43:16 crc kubenswrapper[4811]: I0216 21:43:16.056307 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bdddd22-868b-45c8-9355-cffde39e92d2-utilities\") pod \"redhat-operators-2grjq\" (UID: \"1bdddd22-868b-45c8-9355-cffde39e92d2\") " pod="openshift-marketplace/redhat-operators-2grjq" Feb 16 21:43:16 crc kubenswrapper[4811]: I0216 21:43:16.056385 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bdddd22-868b-45c8-9355-cffde39e92d2-catalog-content\") pod \"redhat-operators-2grjq\" (UID: \"1bdddd22-868b-45c8-9355-cffde39e92d2\") " pod="openshift-marketplace/redhat-operators-2grjq" Feb 16 21:43:16 crc 
kubenswrapper[4811]: I0216 21:43:16.075001 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8xvm\" (UniqueName: \"kubernetes.io/projected/1bdddd22-868b-45c8-9355-cffde39e92d2-kube-api-access-v8xvm\") pod \"redhat-operators-2grjq\" (UID: \"1bdddd22-868b-45c8-9355-cffde39e92d2\") " pod="openshift-marketplace/redhat-operators-2grjq" Feb 16 21:43:16 crc kubenswrapper[4811]: I0216 21:43:16.083771 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2grjq" Feb 16 21:43:16 crc kubenswrapper[4811]: I0216 21:43:16.638846 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2grjq"] Feb 16 21:43:16 crc kubenswrapper[4811]: E0216 21:43:16.709560 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:43:17 crc kubenswrapper[4811]: I0216 21:43:17.327841 4811 generic.go:334] "Generic (PLEG): container finished" podID="1bdddd22-868b-45c8-9355-cffde39e92d2" containerID="a78ccd3397da69ebfd6821b743383f333153aec122f5aeb09693196a226e474f" exitCode=0 Feb 16 21:43:17 crc kubenswrapper[4811]: I0216 21:43:17.328066 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2grjq" event={"ID":"1bdddd22-868b-45c8-9355-cffde39e92d2","Type":"ContainerDied","Data":"a78ccd3397da69ebfd6821b743383f333153aec122f5aeb09693196a226e474f"} Feb 16 21:43:17 crc kubenswrapper[4811]: I0216 21:43:17.328089 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2grjq" 
event={"ID":"1bdddd22-868b-45c8-9355-cffde39e92d2","Type":"ContainerStarted","Data":"4f455315bfc597e8c1203eff7bbb67355ec15cc55a1de009010a597faf13aeff"} Feb 16 21:43:18 crc kubenswrapper[4811]: I0216 21:43:18.337605 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2grjq" event={"ID":"1bdddd22-868b-45c8-9355-cffde39e92d2","Type":"ContainerStarted","Data":"6bf9a24ce2233b550010f051131058b4e505d959b4b2ed336fd45bfc0bad1c66"} Feb 16 21:43:23 crc kubenswrapper[4811]: I0216 21:43:23.383086 4811 generic.go:334] "Generic (PLEG): container finished" podID="1bdddd22-868b-45c8-9355-cffde39e92d2" containerID="6bf9a24ce2233b550010f051131058b4e505d959b4b2ed336fd45bfc0bad1c66" exitCode=0 Feb 16 21:43:23 crc kubenswrapper[4811]: I0216 21:43:23.383156 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2grjq" event={"ID":"1bdddd22-868b-45c8-9355-cffde39e92d2","Type":"ContainerDied","Data":"6bf9a24ce2233b550010f051131058b4e505d959b4b2ed336fd45bfc0bad1c66"} Feb 16 21:43:24 crc kubenswrapper[4811]: I0216 21:43:24.394322 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2grjq" event={"ID":"1bdddd22-868b-45c8-9355-cffde39e92d2","Type":"ContainerStarted","Data":"e5d1003aeeec870def9b144d44c8dea6534df09e22bb95ea4da00f194c99dc0b"} Feb 16 21:43:24 crc kubenswrapper[4811]: I0216 21:43:24.418659 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2grjq" podStartSLOduration=2.918397305 podStartE2EDuration="9.418641634s" podCreationTimestamp="2026-02-16 21:43:15 +0000 UTC" firstStartedPulling="2026-02-16 21:43:17.329443387 +0000 UTC m=+2815.258739315" lastFinishedPulling="2026-02-16 21:43:23.829687706 +0000 UTC m=+2821.758983644" observedRunningTime="2026-02-16 21:43:24.411439448 +0000 UTC m=+2822.340735386" watchObservedRunningTime="2026-02-16 21:43:24.418641634 +0000 UTC m=+2822.347937572" 
Feb 16 21:43:25 crc kubenswrapper[4811]: I0216 21:43:25.410473 4811 generic.go:334] "Generic (PLEG): container finished" podID="c6098058-8a0a-474b-9632-b20b2495ac2f" containerID="83e95539d73e458dbf02d293a64ae0f4cccbcb570ca41d6b640f7767b5422796" exitCode=0 Feb 16 21:43:25 crc kubenswrapper[4811]: I0216 21:43:25.410564 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-272g5/crc-debug-bx2kr" event={"ID":"c6098058-8a0a-474b-9632-b20b2495ac2f","Type":"ContainerDied","Data":"83e95539d73e458dbf02d293a64ae0f4cccbcb570ca41d6b640f7767b5422796"} Feb 16 21:43:26 crc kubenswrapper[4811]: I0216 21:43:26.086154 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2grjq" Feb 16 21:43:26 crc kubenswrapper[4811]: I0216 21:43:26.086572 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2grjq" Feb 16 21:43:26 crc kubenswrapper[4811]: I0216 21:43:26.544530 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-272g5/crc-debug-bx2kr" Feb 16 21:43:26 crc kubenswrapper[4811]: I0216 21:43:26.587636 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-272g5/crc-debug-bx2kr"] Feb 16 21:43:26 crc kubenswrapper[4811]: I0216 21:43:26.609927 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-272g5/crc-debug-bx2kr"] Feb 16 21:43:26 crc kubenswrapper[4811]: I0216 21:43:26.718596 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcmf4\" (UniqueName: \"kubernetes.io/projected/c6098058-8a0a-474b-9632-b20b2495ac2f-kube-api-access-vcmf4\") pod \"c6098058-8a0a-474b-9632-b20b2495ac2f\" (UID: \"c6098058-8a0a-474b-9632-b20b2495ac2f\") " Feb 16 21:43:26 crc kubenswrapper[4811]: I0216 21:43:26.718791 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c6098058-8a0a-474b-9632-b20b2495ac2f-host\") pod \"c6098058-8a0a-474b-9632-b20b2495ac2f\" (UID: \"c6098058-8a0a-474b-9632-b20b2495ac2f\") " Feb 16 21:43:26 crc kubenswrapper[4811]: I0216 21:43:26.718847 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6098058-8a0a-474b-9632-b20b2495ac2f-host" (OuterVolumeSpecName: "host") pod "c6098058-8a0a-474b-9632-b20b2495ac2f" (UID: "c6098058-8a0a-474b-9632-b20b2495ac2f"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:43:26 crc kubenswrapper[4811]: I0216 21:43:26.719394 4811 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c6098058-8a0a-474b-9632-b20b2495ac2f-host\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:26 crc kubenswrapper[4811]: I0216 21:43:26.739939 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6098058-8a0a-474b-9632-b20b2495ac2f-kube-api-access-vcmf4" (OuterVolumeSpecName: "kube-api-access-vcmf4") pod "c6098058-8a0a-474b-9632-b20b2495ac2f" (UID: "c6098058-8a0a-474b-9632-b20b2495ac2f"). InnerVolumeSpecName "kube-api-access-vcmf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:43:26 crc kubenswrapper[4811]: I0216 21:43:26.821672 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcmf4\" (UniqueName: \"kubernetes.io/projected/c6098058-8a0a-474b-9632-b20b2495ac2f-kube-api-access-vcmf4\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:27 crc kubenswrapper[4811]: I0216 21:43:27.144537 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2grjq" podUID="1bdddd22-868b-45c8-9355-cffde39e92d2" containerName="registry-server" probeResult="failure" output=< Feb 16 21:43:27 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Feb 16 21:43:27 crc kubenswrapper[4811]: > Feb 16 21:43:27 crc kubenswrapper[4811]: I0216 21:43:27.429572 4811 scope.go:117] "RemoveContainer" containerID="83e95539d73e458dbf02d293a64ae0f4cccbcb570ca41d6b640f7767b5422796" Feb 16 21:43:27 crc kubenswrapper[4811]: I0216 21:43:27.429690 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-272g5/crc-debug-bx2kr" Feb 16 21:43:27 crc kubenswrapper[4811]: E0216 21:43:27.760561 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bdddd22_868b_45c8_9355_cffde39e92d2.slice/crio-conmon-6bf9a24ce2233b550010f051131058b4e505d959b4b2ed336fd45bfc0bad1c66.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6098058_8a0a_474b_9632_b20b2495ac2f.slice/crio-5fbc8a7b17c56619bb4ef9f87ac62d4691bc738ef345ccde2d47ec157fa7d0fc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bdddd22_868b_45c8_9355_cffde39e92d2.slice/crio-6bf9a24ce2233b550010f051131058b4e505d959b4b2ed336fd45bfc0bad1c66.scope\": RecentStats: unable to find data in memory cache]" Feb 16 21:43:27 crc kubenswrapper[4811]: I0216 21:43:27.914304 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-272g5/crc-debug-lk8g2"] Feb 16 21:43:27 crc kubenswrapper[4811]: E0216 21:43:27.914816 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6098058-8a0a-474b-9632-b20b2495ac2f" containerName="container-00" Feb 16 21:43:27 crc kubenswrapper[4811]: I0216 21:43:27.914839 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6098058-8a0a-474b-9632-b20b2495ac2f" containerName="container-00" Feb 16 21:43:27 crc kubenswrapper[4811]: I0216 21:43:27.915079 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6098058-8a0a-474b-9632-b20b2495ac2f" containerName="container-00" Feb 16 21:43:27 crc kubenswrapper[4811]: I0216 21:43:27.915942 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-272g5/crc-debug-lk8g2" Feb 16 21:43:27 crc kubenswrapper[4811]: I0216 21:43:27.918809 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-272g5"/"default-dockercfg-wwbn8" Feb 16 21:43:28 crc kubenswrapper[4811]: I0216 21:43:28.048169 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8a35ec5-6524-43e8-9002-2dc3d874daf4-host\") pod \"crc-debug-lk8g2\" (UID: \"b8a35ec5-6524-43e8-9002-2dc3d874daf4\") " pod="openshift-must-gather-272g5/crc-debug-lk8g2" Feb 16 21:43:28 crc kubenswrapper[4811]: I0216 21:43:28.048410 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz79d\" (UniqueName: \"kubernetes.io/projected/b8a35ec5-6524-43e8-9002-2dc3d874daf4-kube-api-access-fz79d\") pod \"crc-debug-lk8g2\" (UID: \"b8a35ec5-6524-43e8-9002-2dc3d874daf4\") " pod="openshift-must-gather-272g5/crc-debug-lk8g2" Feb 16 21:43:28 crc kubenswrapper[4811]: I0216 21:43:28.149925 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz79d\" (UniqueName: \"kubernetes.io/projected/b8a35ec5-6524-43e8-9002-2dc3d874daf4-kube-api-access-fz79d\") pod \"crc-debug-lk8g2\" (UID: \"b8a35ec5-6524-43e8-9002-2dc3d874daf4\") " pod="openshift-must-gather-272g5/crc-debug-lk8g2" Feb 16 21:43:28 crc kubenswrapper[4811]: I0216 21:43:28.150041 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8a35ec5-6524-43e8-9002-2dc3d874daf4-host\") pod \"crc-debug-lk8g2\" (UID: \"b8a35ec5-6524-43e8-9002-2dc3d874daf4\") " pod="openshift-must-gather-272g5/crc-debug-lk8g2" Feb 16 21:43:28 crc kubenswrapper[4811]: I0216 21:43:28.150164 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/b8a35ec5-6524-43e8-9002-2dc3d874daf4-host\") pod \"crc-debug-lk8g2\" (UID: \"b8a35ec5-6524-43e8-9002-2dc3d874daf4\") " pod="openshift-must-gather-272g5/crc-debug-lk8g2" Feb 16 21:43:28 crc kubenswrapper[4811]: I0216 21:43:28.172350 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz79d\" (UniqueName: \"kubernetes.io/projected/b8a35ec5-6524-43e8-9002-2dc3d874daf4-kube-api-access-fz79d\") pod \"crc-debug-lk8g2\" (UID: \"b8a35ec5-6524-43e8-9002-2dc3d874daf4\") " pod="openshift-must-gather-272g5/crc-debug-lk8g2" Feb 16 21:43:28 crc kubenswrapper[4811]: I0216 21:43:28.240655 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-272g5/crc-debug-lk8g2" Feb 16 21:43:28 crc kubenswrapper[4811]: W0216 21:43:28.271243 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8a35ec5_6524_43e8_9002_2dc3d874daf4.slice/crio-72ca7a6498465f1cf52820a101d0422ad57f18a7f92135ec0b1c70e052c3fb48 WatchSource:0}: Error finding container 72ca7a6498465f1cf52820a101d0422ad57f18a7f92135ec0b1c70e052c3fb48: Status 404 returned error can't find the container with id 72ca7a6498465f1cf52820a101d0422ad57f18a7f92135ec0b1c70e052c3fb48 Feb 16 21:43:28 crc kubenswrapper[4811]: I0216 21:43:28.440796 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-272g5/crc-debug-lk8g2" event={"ID":"b8a35ec5-6524-43e8-9002-2dc3d874daf4","Type":"ContainerStarted","Data":"72ca7a6498465f1cf52820a101d0422ad57f18a7f92135ec0b1c70e052c3fb48"} Feb 16 21:43:28 crc kubenswrapper[4811]: I0216 21:43:28.714919 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6098058-8a0a-474b-9632-b20b2495ac2f" path="/var/lib/kubelet/pods/c6098058-8a0a-474b-9632-b20b2495ac2f/volumes" Feb 16 21:43:29 crc kubenswrapper[4811]: I0216 21:43:29.449957 4811 generic.go:334] "Generic (PLEG): container 
finished" podID="b8a35ec5-6524-43e8-9002-2dc3d874daf4" containerID="268cb255950ae697600b841a981f1d578496d7b775f1143141787bda2749797d" exitCode=1 Feb 16 21:43:29 crc kubenswrapper[4811]: I0216 21:43:29.450012 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-272g5/crc-debug-lk8g2" event={"ID":"b8a35ec5-6524-43e8-9002-2dc3d874daf4","Type":"ContainerDied","Data":"268cb255950ae697600b841a981f1d578496d7b775f1143141787bda2749797d"} Feb 16 21:43:29 crc kubenswrapper[4811]: I0216 21:43:29.502507 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-272g5/crc-debug-lk8g2"] Feb 16 21:43:29 crc kubenswrapper[4811]: I0216 21:43:29.515825 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-272g5/crc-debug-lk8g2"] Feb 16 21:43:30 crc kubenswrapper[4811]: I0216 21:43:30.591008 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-272g5/crc-debug-lk8g2" Feb 16 21:43:30 crc kubenswrapper[4811]: I0216 21:43:30.696010 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz79d\" (UniqueName: \"kubernetes.io/projected/b8a35ec5-6524-43e8-9002-2dc3d874daf4-kube-api-access-fz79d\") pod \"b8a35ec5-6524-43e8-9002-2dc3d874daf4\" (UID: \"b8a35ec5-6524-43e8-9002-2dc3d874daf4\") " Feb 16 21:43:30 crc kubenswrapper[4811]: I0216 21:43:30.696304 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8a35ec5-6524-43e8-9002-2dc3d874daf4-host\") pod \"b8a35ec5-6524-43e8-9002-2dc3d874daf4\" (UID: \"b8a35ec5-6524-43e8-9002-2dc3d874daf4\") " Feb 16 21:43:30 crc kubenswrapper[4811]: I0216 21:43:30.696403 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8a35ec5-6524-43e8-9002-2dc3d874daf4-host" (OuterVolumeSpecName: "host") pod "b8a35ec5-6524-43e8-9002-2dc3d874daf4" (UID: 
"b8a35ec5-6524-43e8-9002-2dc3d874daf4"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:43:30 crc kubenswrapper[4811]: I0216 21:43:30.697046 4811 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8a35ec5-6524-43e8-9002-2dc3d874daf4-host\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:30 crc kubenswrapper[4811]: I0216 21:43:30.710948 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8a35ec5-6524-43e8-9002-2dc3d874daf4-kube-api-access-fz79d" (OuterVolumeSpecName: "kube-api-access-fz79d") pod "b8a35ec5-6524-43e8-9002-2dc3d874daf4" (UID: "b8a35ec5-6524-43e8-9002-2dc3d874daf4"). InnerVolumeSpecName "kube-api-access-fz79d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:43:30 crc kubenswrapper[4811]: I0216 21:43:30.714174 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8a35ec5-6524-43e8-9002-2dc3d874daf4" path="/var/lib/kubelet/pods/b8a35ec5-6524-43e8-9002-2dc3d874daf4/volumes" Feb 16 21:43:30 crc kubenswrapper[4811]: I0216 21:43:30.798857 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz79d\" (UniqueName: \"kubernetes.io/projected/b8a35ec5-6524-43e8-9002-2dc3d874daf4-kube-api-access-fz79d\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:31 crc kubenswrapper[4811]: I0216 21:43:31.471285 4811 scope.go:117] "RemoveContainer" containerID="268cb255950ae697600b841a981f1d578496d7b775f1143141787bda2749797d" Feb 16 21:43:31 crc kubenswrapper[4811]: I0216 21:43:31.471348 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-272g5/crc-debug-lk8g2" Feb 16 21:43:31 crc kubenswrapper[4811]: E0216 21:43:31.704456 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:43:37 crc kubenswrapper[4811]: I0216 21:43:37.144916 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2grjq" podUID="1bdddd22-868b-45c8-9355-cffde39e92d2" containerName="registry-server" probeResult="failure" output=< Feb 16 21:43:37 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Feb 16 21:43:37 crc kubenswrapper[4811]: > Feb 16 21:43:38 crc kubenswrapper[4811]: E0216 21:43:38.014968 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bdddd22_868b_45c8_9355_cffde39e92d2.slice/crio-6bf9a24ce2233b550010f051131058b4e505d959b4b2ed336fd45bfc0bad1c66.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bdddd22_868b_45c8_9355_cffde39e92d2.slice/crio-conmon-6bf9a24ce2233b550010f051131058b4e505d959b4b2ed336fd45bfc0bad1c66.scope\": RecentStats: unable to find data in memory cache]" Feb 16 21:43:46 crc kubenswrapper[4811]: I0216 21:43:46.144448 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2grjq" Feb 16 21:43:46 crc kubenswrapper[4811]: I0216 21:43:46.211836 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2grjq" Feb 16 21:43:46 crc kubenswrapper[4811]: E0216 21:43:46.705271 4811 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:43:46 crc kubenswrapper[4811]: I0216 21:43:46.952655 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2grjq"] Feb 16 21:43:47 crc kubenswrapper[4811]: I0216 21:43:47.637325 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2grjq" podUID="1bdddd22-868b-45c8-9355-cffde39e92d2" containerName="registry-server" containerID="cri-o://e5d1003aeeec870def9b144d44c8dea6534df09e22bb95ea4da00f194c99dc0b" gracePeriod=2 Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.198231 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2grjq" Feb 16 21:43:48 crc kubenswrapper[4811]: E0216 21:43:48.252530 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bdddd22_868b_45c8_9355_cffde39e92d2.slice/crio-6bf9a24ce2233b550010f051131058b4e505d959b4b2ed336fd45bfc0bad1c66.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bdddd22_868b_45c8_9355_cffde39e92d2.slice/crio-conmon-6bf9a24ce2233b550010f051131058b4e505d959b4b2ed336fd45bfc0bad1c66.scope\": RecentStats: unable to find data in memory cache]" Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.279254 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bdddd22-868b-45c8-9355-cffde39e92d2-catalog-content\") pod 
\"1bdddd22-868b-45c8-9355-cffde39e92d2\" (UID: \"1bdddd22-868b-45c8-9355-cffde39e92d2\") " Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.279436 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8xvm\" (UniqueName: \"kubernetes.io/projected/1bdddd22-868b-45c8-9355-cffde39e92d2-kube-api-access-v8xvm\") pod \"1bdddd22-868b-45c8-9355-cffde39e92d2\" (UID: \"1bdddd22-868b-45c8-9355-cffde39e92d2\") " Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.279532 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bdddd22-868b-45c8-9355-cffde39e92d2-utilities\") pod \"1bdddd22-868b-45c8-9355-cffde39e92d2\" (UID: \"1bdddd22-868b-45c8-9355-cffde39e92d2\") " Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.280980 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bdddd22-868b-45c8-9355-cffde39e92d2-utilities" (OuterVolumeSpecName: "utilities") pod "1bdddd22-868b-45c8-9355-cffde39e92d2" (UID: "1bdddd22-868b-45c8-9355-cffde39e92d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.297554 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bdddd22-868b-45c8-9355-cffde39e92d2-kube-api-access-v8xvm" (OuterVolumeSpecName: "kube-api-access-v8xvm") pod "1bdddd22-868b-45c8-9355-cffde39e92d2" (UID: "1bdddd22-868b-45c8-9355-cffde39e92d2"). InnerVolumeSpecName "kube-api-access-v8xvm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.364582 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.364649 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.381912 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8xvm\" (UniqueName: \"kubernetes.io/projected/1bdddd22-868b-45c8-9355-cffde39e92d2-kube-api-access-v8xvm\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.382021 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bdddd22-868b-45c8-9355-cffde39e92d2-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.424639 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bdddd22-868b-45c8-9355-cffde39e92d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1bdddd22-868b-45c8-9355-cffde39e92d2" (UID: "1bdddd22-868b-45c8-9355-cffde39e92d2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.484170 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bdddd22-868b-45c8-9355-cffde39e92d2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.652053 4811 generic.go:334] "Generic (PLEG): container finished" podID="1bdddd22-868b-45c8-9355-cffde39e92d2" containerID="e5d1003aeeec870def9b144d44c8dea6534df09e22bb95ea4da00f194c99dc0b" exitCode=0 Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.652108 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2grjq" event={"ID":"1bdddd22-868b-45c8-9355-cffde39e92d2","Type":"ContainerDied","Data":"e5d1003aeeec870def9b144d44c8dea6534df09e22bb95ea4da00f194c99dc0b"} Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.652144 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2grjq" event={"ID":"1bdddd22-868b-45c8-9355-cffde39e92d2","Type":"ContainerDied","Data":"4f455315bfc597e8c1203eff7bbb67355ec15cc55a1de009010a597faf13aeff"} Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.652168 4811 scope.go:117] "RemoveContainer" containerID="e5d1003aeeec870def9b144d44c8dea6534df09e22bb95ea4da00f194c99dc0b" Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.652232 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2grjq" Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.680160 4811 scope.go:117] "RemoveContainer" containerID="6bf9a24ce2233b550010f051131058b4e505d959b4b2ed336fd45bfc0bad1c66" Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.717742 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2grjq"] Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.725396 4811 scope.go:117] "RemoveContainer" containerID="a78ccd3397da69ebfd6821b743383f333153aec122f5aeb09693196a226e474f" Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.727915 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2grjq"] Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.757250 4811 scope.go:117] "RemoveContainer" containerID="e5d1003aeeec870def9b144d44c8dea6534df09e22bb95ea4da00f194c99dc0b" Feb 16 21:43:48 crc kubenswrapper[4811]: E0216 21:43:48.758380 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5d1003aeeec870def9b144d44c8dea6534df09e22bb95ea4da00f194c99dc0b\": container with ID starting with e5d1003aeeec870def9b144d44c8dea6534df09e22bb95ea4da00f194c99dc0b not found: ID does not exist" containerID="e5d1003aeeec870def9b144d44c8dea6534df09e22bb95ea4da00f194c99dc0b" Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.758410 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5d1003aeeec870def9b144d44c8dea6534df09e22bb95ea4da00f194c99dc0b"} err="failed to get container status \"e5d1003aeeec870def9b144d44c8dea6534df09e22bb95ea4da00f194c99dc0b\": rpc error: code = NotFound desc = could not find container \"e5d1003aeeec870def9b144d44c8dea6534df09e22bb95ea4da00f194c99dc0b\": container with ID starting with e5d1003aeeec870def9b144d44c8dea6534df09e22bb95ea4da00f194c99dc0b not found: ID does 
not exist" Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.758451 4811 scope.go:117] "RemoveContainer" containerID="6bf9a24ce2233b550010f051131058b4e505d959b4b2ed336fd45bfc0bad1c66" Feb 16 21:43:48 crc kubenswrapper[4811]: E0216 21:43:48.758909 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bf9a24ce2233b550010f051131058b4e505d959b4b2ed336fd45bfc0bad1c66\": container with ID starting with 6bf9a24ce2233b550010f051131058b4e505d959b4b2ed336fd45bfc0bad1c66 not found: ID does not exist" containerID="6bf9a24ce2233b550010f051131058b4e505d959b4b2ed336fd45bfc0bad1c66" Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.758956 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bf9a24ce2233b550010f051131058b4e505d959b4b2ed336fd45bfc0bad1c66"} err="failed to get container status \"6bf9a24ce2233b550010f051131058b4e505d959b4b2ed336fd45bfc0bad1c66\": rpc error: code = NotFound desc = could not find container \"6bf9a24ce2233b550010f051131058b4e505d959b4b2ed336fd45bfc0bad1c66\": container with ID starting with 6bf9a24ce2233b550010f051131058b4e505d959b4b2ed336fd45bfc0bad1c66 not found: ID does not exist" Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.758986 4811 scope.go:117] "RemoveContainer" containerID="a78ccd3397da69ebfd6821b743383f333153aec122f5aeb09693196a226e474f" Feb 16 21:43:48 crc kubenswrapper[4811]: E0216 21:43:48.759352 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a78ccd3397da69ebfd6821b743383f333153aec122f5aeb09693196a226e474f\": container with ID starting with a78ccd3397da69ebfd6821b743383f333153aec122f5aeb09693196a226e474f not found: ID does not exist" containerID="a78ccd3397da69ebfd6821b743383f333153aec122f5aeb09693196a226e474f" Feb 16 21:43:48 crc kubenswrapper[4811]: I0216 21:43:48.759383 4811 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a78ccd3397da69ebfd6821b743383f333153aec122f5aeb09693196a226e474f"} err="failed to get container status \"a78ccd3397da69ebfd6821b743383f333153aec122f5aeb09693196a226e474f\": rpc error: code = NotFound desc = could not find container \"a78ccd3397da69ebfd6821b743383f333153aec122f5aeb09693196a226e474f\": container with ID starting with a78ccd3397da69ebfd6821b743383f333153aec122f5aeb09693196a226e474f not found: ID does not exist" Feb 16 21:43:50 crc kubenswrapper[4811]: I0216 21:43:50.714719 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bdddd22-868b-45c8-9355-cffde39e92d2" path="/var/lib/kubelet/pods/1bdddd22-868b-45c8-9355-cffde39e92d2/volumes" Feb 16 21:43:58 crc kubenswrapper[4811]: E0216 21:43:58.532655 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bdddd22_868b_45c8_9355_cffde39e92d2.slice/crio-6bf9a24ce2233b550010f051131058b4e505d959b4b2ed336fd45bfc0bad1c66.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bdddd22_868b_45c8_9355_cffde39e92d2.slice/crio-conmon-6bf9a24ce2233b550010f051131058b4e505d959b4b2ed336fd45bfc0bad1c66.scope\": RecentStats: unable to find data in memory cache]" Feb 16 21:43:59 crc kubenswrapper[4811]: E0216 21:43:59.706084 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:44:08 crc kubenswrapper[4811]: E0216 21:44:08.791053 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bdddd22_868b_45c8_9355_cffde39e92d2.slice/crio-conmon-6bf9a24ce2233b550010f051131058b4e505d959b4b2ed336fd45bfc0bad1c66.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bdddd22_868b_45c8_9355_cffde39e92d2.slice/crio-6bf9a24ce2233b550010f051131058b4e505d959b4b2ed336fd45bfc0bad1c66.scope\": RecentStats: unable to find data in memory cache]" Feb 16 21:44:10 crc kubenswrapper[4811]: E0216 21:44:10.705112 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:44:18 crc kubenswrapper[4811]: I0216 21:44:18.363842 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:44:18 crc kubenswrapper[4811]: I0216 21:44:18.364298 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:44:19 crc kubenswrapper[4811]: E0216 21:44:19.059167 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bdddd22_868b_45c8_9355_cffde39e92d2.slice/crio-conmon-6bf9a24ce2233b550010f051131058b4e505d959b4b2ed336fd45bfc0bad1c66.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bdddd22_868b_45c8_9355_cffde39e92d2.slice/crio-6bf9a24ce2233b550010f051131058b4e505d959b4b2ed336fd45bfc0bad1c66.scope\": RecentStats: unable to find data in memory cache]" Feb 16 21:44:22 crc kubenswrapper[4811]: E0216 21:44:22.737289 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:44:23 crc kubenswrapper[4811]: I0216 21:44:23.604043 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_117fc5a2-d29b-4844-9dc6-4359d1c4c24d/init-config-reloader/0.log" Feb 16 21:44:23 crc kubenswrapper[4811]: I0216 21:44:23.904170 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_117fc5a2-d29b-4844-9dc6-4359d1c4c24d/init-config-reloader/0.log" Feb 16 21:44:23 crc kubenswrapper[4811]: I0216 21:44:23.965412 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_117fc5a2-d29b-4844-9dc6-4359d1c4c24d/config-reloader/0.log" Feb 16 21:44:23 crc kubenswrapper[4811]: I0216 21:44:23.999381 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_117fc5a2-d29b-4844-9dc6-4359d1c4c24d/alertmanager/0.log" Feb 16 21:44:24 crc kubenswrapper[4811]: I0216 21:44:24.081326 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-77895df746-7lfzq_715afebd-20b0-4059-953f-aee92f9562f9/barbican-api/0.log" Feb 16 21:44:24 crc kubenswrapper[4811]: I0216 21:44:24.170979 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-77895df746-7lfzq_715afebd-20b0-4059-953f-aee92f9562f9/barbican-api-log/0.log" Feb 16 21:44:24 crc kubenswrapper[4811]: I0216 21:44:24.228224 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8547d7757d-rdzdb_0dfcebd2-4ec9-463d-9ce6-801911550f42/barbican-keystone-listener/0.log" Feb 16 21:44:24 crc kubenswrapper[4811]: I0216 21:44:24.318318 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8547d7757d-rdzdb_0dfcebd2-4ec9-463d-9ce6-801911550f42/barbican-keystone-listener-log/0.log" Feb 16 21:44:24 crc kubenswrapper[4811]: I0216 21:44:24.391845 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-58f445f5bc-kgwdh_44198ae0-a1f3-4eee-bcba-4898da249e24/barbican-worker/0.log" Feb 16 21:44:24 crc kubenswrapper[4811]: I0216 21:44:24.429421 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-58f445f5bc-kgwdh_44198ae0-a1f3-4eee-bcba-4898da249e24/barbican-worker-log/0.log" Feb 16 21:44:24 crc kubenswrapper[4811]: I0216 21:44:24.630723 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f889b0d1-bc4c-4eeb-a4bf-789d313c1055/ceilometer-notification-agent/0.log" Feb 16 21:44:24 crc kubenswrapper[4811]: I0216 21:44:24.631593 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f889b0d1-bc4c-4eeb-a4bf-789d313c1055/ceilometer-central-agent/0.log" Feb 16 21:44:24 crc kubenswrapper[4811]: I0216 21:44:24.663992 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f889b0d1-bc4c-4eeb-a4bf-789d313c1055/proxy-httpd/0.log" Feb 16 21:44:24 crc 
kubenswrapper[4811]: I0216 21:44:24.728920 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f889b0d1-bc4c-4eeb-a4bf-789d313c1055/sg-core/0.log" Feb 16 21:44:24 crc kubenswrapper[4811]: I0216 21:44:24.844476 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_26515dac-f971-477b-b088-1f656ddc3f62/cinder-api/0.log" Feb 16 21:44:24 crc kubenswrapper[4811]: I0216 21:44:24.895490 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_26515dac-f971-477b-b088-1f656ddc3f62/cinder-api-log/0.log" Feb 16 21:44:25 crc kubenswrapper[4811]: I0216 21:44:25.023356 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d14d491d-bfdb-47df-92b3-e57f805e415f/cinder-scheduler/0.log" Feb 16 21:44:25 crc kubenswrapper[4811]: I0216 21:44:25.176866 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d14d491d-bfdb-47df-92b3-e57f805e415f/probe/0.log" Feb 16 21:44:25 crc kubenswrapper[4811]: I0216 21:44:25.308637 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_1d41079d-f556-47e9-bc54-75dc6461451e/loki-compactor/0.log" Feb 16 21:44:25 crc kubenswrapper[4811]: I0216 21:44:25.375786 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-585d9bcbc-68wjj_74ea8ac5-2a83-484e-b8bc-ddf8c7045e00/loki-distributor/0.log" Feb 16 21:44:25 crc kubenswrapper[4811]: I0216 21:44:25.505245 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-kx7gd_a4296913-66bb-481c-a5a8-b667e191ae73/gateway/0.log" Feb 16 21:44:25 crc kubenswrapper[4811]: I0216 21:44:25.590943 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-pcfgn_abc1ac44-b93b-4a99-af90-c0b9c9839e96/gateway/0.log" Feb 16 21:44:25 crc 
kubenswrapper[4811]: I0216 21:44:25.698581 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_4058dcf3-ddd9-4d4f-b909-9f7b0323c65a/loki-index-gateway/0.log" Feb 16 21:44:25 crc kubenswrapper[4811]: I0216 21:44:25.813690 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_5f050753-85f4-413e-92b6-0503db5e7391/loki-ingester/0.log" Feb 16 21:44:25 crc kubenswrapper[4811]: I0216 21:44:25.903295 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-querier-58c84b5844-jld6l_cb32d76e-7b43-4a6f-9d01-922be5156eec/loki-querier/0.log" Feb 16 21:44:26 crc kubenswrapper[4811]: I0216 21:44:26.041839 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-67bb4dfcd8-64jqb_5557acf3-367b-4296-a944-d52fb4545738/loki-query-frontend/0.log" Feb 16 21:44:26 crc kubenswrapper[4811]: I0216 21:44:26.153295 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cd5cbd7b9-kzzxx_6f623a0b-500d-4215-b574-3f4f8234fd64/init/0.log" Feb 16 21:44:26 crc kubenswrapper[4811]: I0216 21:44:26.304917 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cd5cbd7b9-kzzxx_6f623a0b-500d-4215-b574-3f4f8234fd64/dnsmasq-dns/0.log" Feb 16 21:44:26 crc kubenswrapper[4811]: I0216 21:44:26.334650 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cd5cbd7b9-kzzxx_6f623a0b-500d-4215-b574-3f4f8234fd64/init/0.log" Feb 16 21:44:26 crc kubenswrapper[4811]: I0216 21:44:26.519682 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_24e0a62a-333f-499c-b046-62e94e2ff0be/glance-httpd/0.log" Feb 16 21:44:26 crc kubenswrapper[4811]: I0216 21:44:26.738320 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_24e0a62a-333f-499c-b046-62e94e2ff0be/glance-log/0.log" Feb 16 21:44:26 crc kubenswrapper[4811]: I0216 21:44:26.806566 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_19e08db9-4ed5-42e9-bf1e-dec8a1906116/glance-httpd/0.log" Feb 16 21:44:26 crc kubenswrapper[4811]: I0216 21:44:26.853109 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_19e08db9-4ed5-42e9-bf1e-dec8a1906116/glance-log/0.log" Feb 16 21:44:27 crc kubenswrapper[4811]: I0216 21:44:27.038149 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7bf9c6cdb6-77vqw_f4009257-0fad-4d48-b144-6faf80ea5e0c/keystone-api/0.log" Feb 16 21:44:27 crc kubenswrapper[4811]: I0216 21:44:27.064511 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_435ef7b8-9bee-4232-89cf-f8fd9ad487a7/kube-state-metrics/0.log" Feb 16 21:44:27 crc kubenswrapper[4811]: I0216 21:44:27.337153 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-76ccfcd95-9jxjj_fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae/neutron-api/0.log" Feb 16 21:44:27 crc kubenswrapper[4811]: I0216 21:44:27.364086 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-76ccfcd95-9jxjj_fd0c8a7f-ec52-41f3-9b5d-4cdffe2f36ae/neutron-httpd/0.log" Feb 16 21:44:27 crc kubenswrapper[4811]: I0216 21:44:27.654401 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e5641083-7376-4bd9-93fc-d4c78fdf086c/nova-api-log/0.log" Feb 16 21:44:27 crc kubenswrapper[4811]: I0216 21:44:27.785939 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_d4d07cdd-c697-40f4-b8a6-e4fd3719ebe3/nova-cell0-conductor-conductor/0.log" Feb 16 21:44:27 crc kubenswrapper[4811]: I0216 21:44:27.791494 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_e5641083-7376-4bd9-93fc-d4c78fdf086c/nova-api-api/0.log" Feb 16 21:44:27 crc kubenswrapper[4811]: I0216 21:44:27.931613 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_f13872df-22e7-4ca1-8b4d-3235e5265f5e/nova-cell1-conductor-conductor/0.log" Feb 16 21:44:28 crc kubenswrapper[4811]: I0216 21:44:28.065130 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_8e9b4d89-6e8c-48b7-8fb7-d21aea07d506/nova-cell1-novncproxy-novncproxy/0.log" Feb 16 21:44:28 crc kubenswrapper[4811]: I0216 21:44:28.217598 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_5c06d20a-86c8-4916-b315-971dab244fd9/nova-metadata-log/0.log" Feb 16 21:44:28 crc kubenswrapper[4811]: I0216 21:44:28.444318 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_22a3ecca-decd-46bd-ae63-25f0c42fba02/nova-scheduler-scheduler/0.log" Feb 16 21:44:28 crc kubenswrapper[4811]: I0216 21:44:28.578838 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_10a8f77b-e218-4975-9411-8c380eda2c5a/mysql-bootstrap/0.log" Feb 16 21:44:28 crc kubenswrapper[4811]: I0216 21:44:28.781259 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_10a8f77b-e218-4975-9411-8c380eda2c5a/galera/0.log" Feb 16 21:44:28 crc kubenswrapper[4811]: I0216 21:44:28.790590 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_10a8f77b-e218-4975-9411-8c380eda2c5a/mysql-bootstrap/0.log" Feb 16 21:44:28 crc kubenswrapper[4811]: I0216 21:44:28.963213 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_32a12c18-c799-4092-8ba9-c89b2a5f713a/mysql-bootstrap/0.log" Feb 16 21:44:28 crc kubenswrapper[4811]: I0216 21:44:28.988587 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_5c06d20a-86c8-4916-b315-971dab244fd9/nova-metadata-metadata/0.log" Feb 16 21:44:29 crc kubenswrapper[4811]: I0216 21:44:29.202434 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_32a12c18-c799-4092-8ba9-c89b2a5f713a/galera/0.log" Feb 16 21:44:29 crc kubenswrapper[4811]: I0216 21:44:29.210099 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_32a12c18-c799-4092-8ba9-c89b2a5f713a/mysql-bootstrap/0.log" Feb 16 21:44:29 crc kubenswrapper[4811]: I0216 21:44:29.249742 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_48d3b16f-0a4b-42bc-9443-19ce343df00a/openstackclient/0.log" Feb 16 21:44:29 crc kubenswrapper[4811]: I0216 21:44:29.397253 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-4xj7n_6d8a2432-c873-4ec8-9e02-aaf33ddd6d65/openstack-network-exporter/0.log" Feb 16 21:44:29 crc kubenswrapper[4811]: I0216 21:44:29.458405 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fktqj_08f73916-0e3c-4ef7-97e7-a13b9923b620/ovsdb-server-init/0.log" Feb 16 21:44:29 crc kubenswrapper[4811]: I0216 21:44:29.679623 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fktqj_08f73916-0e3c-4ef7-97e7-a13b9923b620/ovsdb-server-init/0.log" Feb 16 21:44:29 crc kubenswrapper[4811]: I0216 21:44:29.719938 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fktqj_08f73916-0e3c-4ef7-97e7-a13b9923b620/ovsdb-server/0.log" Feb 16 21:44:29 crc kubenswrapper[4811]: I0216 21:44:29.721671 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fktqj_08f73916-0e3c-4ef7-97e7-a13b9923b620/ovs-vswitchd/0.log" Feb 16 21:44:29 crc kubenswrapper[4811]: I0216 21:44:29.893703 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-qhsfb_b8edc00a-d032-460b-9e97-d784b4fdfe5c/ovn-controller/0.log" Feb 16 21:44:29 crc kubenswrapper[4811]: I0216 21:44:29.961624 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f2fff999-08d2-426d-93a5-39ba9b2ad7ef/openstack-network-exporter/0.log" Feb 16 21:44:30 crc kubenswrapper[4811]: I0216 21:44:30.036401 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f2fff999-08d2-426d-93a5-39ba9b2ad7ef/ovn-northd/0.log" Feb 16 21:44:30 crc kubenswrapper[4811]: I0216 21:44:30.158490 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c8c25051-577c-41fd-a7af-fec64121e954/openstack-network-exporter/0.log" Feb 16 21:44:30 crc kubenswrapper[4811]: I0216 21:44:30.192798 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c8c25051-577c-41fd-a7af-fec64121e954/ovsdbserver-nb/0.log" Feb 16 21:44:30 crc kubenswrapper[4811]: I0216 21:44:30.355177 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7a6c69be-2c47-4bcd-906e-ab109340067b/ovsdbserver-sb/0.log" Feb 16 21:44:30 crc kubenswrapper[4811]: I0216 21:44:30.379088 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7a6c69be-2c47-4bcd-906e-ab109340067b/openstack-network-exporter/0.log" Feb 16 21:44:30 crc kubenswrapper[4811]: I0216 21:44:30.539322 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5db7fb44c6-5zcls_f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e/placement-api/0.log" Feb 16 21:44:30 crc kubenswrapper[4811]: I0216 21:44:30.595399 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5db7fb44c6-5zcls_f1d045d5-6ae5-4c3e-aefb-c1fc3e90f64e/placement-log/0.log" Feb 16 21:44:30 crc kubenswrapper[4811]: I0216 21:44:30.738811 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_e994011a-8ba4-4eed-9c4c-5ddac8b43325/init-config-reloader/0.log" Feb 16 21:44:30 crc kubenswrapper[4811]: I0216 21:44:30.909081 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_e994011a-8ba4-4eed-9c4c-5ddac8b43325/config-reloader/0.log" Feb 16 21:44:30 crc kubenswrapper[4811]: I0216 21:44:30.911184 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_e994011a-8ba4-4eed-9c4c-5ddac8b43325/prometheus/0.log" Feb 16 21:44:30 crc kubenswrapper[4811]: I0216 21:44:30.982258 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_e994011a-8ba4-4eed-9c4c-5ddac8b43325/init-config-reloader/0.log" Feb 16 21:44:31 crc kubenswrapper[4811]: I0216 21:44:31.039717 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_e994011a-8ba4-4eed-9c4c-5ddac8b43325/thanos-sidecar/0.log" Feb 16 21:44:31 crc kubenswrapper[4811]: I0216 21:44:31.141223 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_40263486-d6cd-4aa0-9570-affea970096f/setup-container/0.log" Feb 16 21:44:31 crc kubenswrapper[4811]: I0216 21:44:31.290355 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_40263486-d6cd-4aa0-9570-affea970096f/rabbitmq/0.log" Feb 16 21:44:31 crc kubenswrapper[4811]: I0216 21:44:31.329827 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_cd541633-15e7-4a12-99a4-72637521386d/setup-container/0.log" Feb 16 21:44:31 crc kubenswrapper[4811]: I0216 21:44:31.341830 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_40263486-d6cd-4aa0-9570-affea970096f/setup-container/0.log" Feb 16 21:44:31 crc kubenswrapper[4811]: I0216 21:44:31.567076 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_cd541633-15e7-4a12-99a4-72637521386d/setup-container/0.log" Feb 16 21:44:31 crc kubenswrapper[4811]: I0216 21:44:31.584825 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_cd541633-15e7-4a12-99a4-72637521386d/rabbitmq/0.log" Feb 16 21:44:31 crc kubenswrapper[4811]: I0216 21:44:31.738929 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-69cc95b6b9-n22wz_a6d638a7-6781-47f5-af27-712f046ec70a/proxy-server/0.log" Feb 16 21:44:31 crc kubenswrapper[4811]: I0216 21:44:31.764275 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-69cc95b6b9-n22wz_a6d638a7-6781-47f5-af27-712f046ec70a/proxy-httpd/0.log" Feb 16 21:44:31 crc kubenswrapper[4811]: I0216 21:44:31.835956 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-7dnxf_8b6c7641-19e7-4831-82d4-8eda499301b7/swift-ring-rebalance/0.log" Feb 16 21:44:31 crc kubenswrapper[4811]: I0216 21:44:31.960879 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3590443c-c5fd-4eec-a144-06cddd956651/account-auditor/0.log" Feb 16 21:44:32 crc kubenswrapper[4811]: I0216 21:44:32.055293 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3590443c-c5fd-4eec-a144-06cddd956651/account-reaper/0.log" Feb 16 21:44:32 crc kubenswrapper[4811]: I0216 21:44:32.102950 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3590443c-c5fd-4eec-a144-06cddd956651/account-replicator/0.log" Feb 16 21:44:32 crc kubenswrapper[4811]: I0216 21:44:32.168386 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3590443c-c5fd-4eec-a144-06cddd956651/account-server/0.log" Feb 16 21:44:32 crc kubenswrapper[4811]: I0216 21:44:32.179437 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_3590443c-c5fd-4eec-a144-06cddd956651/container-auditor/0.log" Feb 16 21:44:32 crc kubenswrapper[4811]: I0216 21:44:32.259187 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3590443c-c5fd-4eec-a144-06cddd956651/container-replicator/0.log" Feb 16 21:44:32 crc kubenswrapper[4811]: I0216 21:44:32.290388 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3590443c-c5fd-4eec-a144-06cddd956651/container-server/0.log" Feb 16 21:44:32 crc kubenswrapper[4811]: I0216 21:44:32.392995 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3590443c-c5fd-4eec-a144-06cddd956651/object-auditor/0.log" Feb 16 21:44:32 crc kubenswrapper[4811]: I0216 21:44:32.400764 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3590443c-c5fd-4eec-a144-06cddd956651/container-updater/0.log" Feb 16 21:44:32 crc kubenswrapper[4811]: I0216 21:44:32.475233 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3590443c-c5fd-4eec-a144-06cddd956651/object-expirer/0.log" Feb 16 21:44:32 crc kubenswrapper[4811]: I0216 21:44:32.513072 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3590443c-c5fd-4eec-a144-06cddd956651/object-replicator/0.log" Feb 16 21:44:32 crc kubenswrapper[4811]: I0216 21:44:32.589293 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3590443c-c5fd-4eec-a144-06cddd956651/object-updater/0.log" Feb 16 21:44:32 crc kubenswrapper[4811]: I0216 21:44:32.623127 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3590443c-c5fd-4eec-a144-06cddd956651/object-server/0.log" Feb 16 21:44:32 crc kubenswrapper[4811]: I0216 21:44:32.678511 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_3590443c-c5fd-4eec-a144-06cddd956651/rsync/0.log" Feb 16 21:44:32 crc kubenswrapper[4811]: I0216 21:44:32.720470 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3590443c-c5fd-4eec-a144-06cddd956651/swift-recon-cron/0.log" Feb 16 21:44:36 crc kubenswrapper[4811]: I0216 21:44:36.928114 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_211f2606-1d07-4c2d-8533-d53495a99d5b/memcached/0.log" Feb 16 21:44:37 crc kubenswrapper[4811]: E0216 21:44:37.706298 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:44:48 crc kubenswrapper[4811]: I0216 21:44:48.363771 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:44:48 crc kubenswrapper[4811]: I0216 21:44:48.364518 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:44:48 crc kubenswrapper[4811]: I0216 21:44:48.364575 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" Feb 16 21:44:48 crc kubenswrapper[4811]: I0216 21:44:48.365389 4811 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01"} pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:44:48 crc kubenswrapper[4811]: I0216 21:44:48.365449 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" containerID="cri-o://0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" gracePeriod=600 Feb 16 21:44:48 crc kubenswrapper[4811]: E0216 21:44:48.495863 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:44:49 crc kubenswrapper[4811]: I0216 21:44:49.212160 4811 generic.go:334] "Generic (PLEG): container finished" podID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerID="0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" exitCode=0 Feb 16 21:44:49 crc kubenswrapper[4811]: I0216 21:44:49.212229 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerDied","Data":"0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01"} Feb 16 21:44:49 crc kubenswrapper[4811]: I0216 21:44:49.212268 4811 scope.go:117] "RemoveContainer" containerID="8826db22478a9264a5b7ed8387e5eca0a5e6596581cb174b0034beb59f99f9d4" Feb 16 21:44:49 crc 
kubenswrapper[4811]: I0216 21:44:49.213134 4811 scope.go:117] "RemoveContainer" containerID="0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" Feb 16 21:44:49 crc kubenswrapper[4811]: E0216 21:44:49.213649 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:44:50 crc kubenswrapper[4811]: I0216 21:44:50.709749 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:44:50 crc kubenswrapper[4811]: E0216 21:44:50.837731 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 21:44:50 crc kubenswrapper[4811]: E0216 21:44:50.837795 4811 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 21:44:50 crc kubenswrapper[4811]: E0216 21:44:50.837920 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s56zx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-x49kk_openstack(46d0afcb-2a14-4e67-89fc-ed848d1637ce): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:44:50 crc kubenswrapper[4811]: E0216 21:44:50.839940 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:44:56 crc kubenswrapper[4811]: I0216 21:44:56.713390 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l_aee284bd-e428-4002-aace-b760dbe7acf3/util/0.log" Feb 16 21:44:56 crc kubenswrapper[4811]: I0216 21:44:56.911421 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l_aee284bd-e428-4002-aace-b760dbe7acf3/util/0.log" Feb 16 21:44:56 crc kubenswrapper[4811]: I0216 21:44:56.928310 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l_aee284bd-e428-4002-aace-b760dbe7acf3/pull/0.log" Feb 16 21:44:57 crc kubenswrapper[4811]: I0216 21:44:57.092664 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l_aee284bd-e428-4002-aace-b760dbe7acf3/pull/0.log" Feb 16 21:44:57 crc kubenswrapper[4811]: I0216 21:44:57.282372 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l_aee284bd-e428-4002-aace-b760dbe7acf3/util/0.log" Feb 16 21:44:57 crc kubenswrapper[4811]: I0216 21:44:57.288837 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l_aee284bd-e428-4002-aace-b760dbe7acf3/pull/0.log" Feb 16 21:44:57 crc kubenswrapper[4811]: I0216 21:44:57.476104 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76erk88l_aee284bd-e428-4002-aace-b760dbe7acf3/extract/0.log" Feb 16 21:44:57 crc kubenswrapper[4811]: I0216 
21:44:57.677259 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-q89bq_fa86f7ef-e087-4967-acb0-3d5e36d5629e/manager/0.log" Feb 16 21:44:57 crc kubenswrapper[4811]: I0216 21:44:57.953384 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-lh792_6990871b-47ed-4368-a1f2-f582e0c01e81/manager/0.log" Feb 16 21:44:57 crc kubenswrapper[4811]: I0216 21:44:57.958857 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-jckks_6abf6059-c304-4c75-b9df-89c83549963c/manager/0.log" Feb 16 21:44:58 crc kubenswrapper[4811]: I0216 21:44:58.131220 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-cdqwz_9e676231-474a-4831-a71f-7788b6d15f03/manager/0.log" Feb 16 21:44:58 crc kubenswrapper[4811]: I0216 21:44:58.173814 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-xsbk9_7d5cf64e-0afc-4017-94b9-8fdf40a7cf89/manager/0.log" Feb 16 21:44:58 crc kubenswrapper[4811]: I0216 21:44:58.587073 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-rwl5h_4959d7c9-42b3-479d-a5d9-f2d2a941b57f/manager/0.log" Feb 16 21:44:58 crc kubenswrapper[4811]: I0216 21:44:58.660762 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-qpcgx_df9a27af-f077-408b-8559-29f9c41b7d78/manager/0.log" Feb 16 21:44:58 crc kubenswrapper[4811]: I0216 21:44:58.879664 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-8s6fh_4e16647b-338f-45cf-b590-419a41d36314/manager/0.log" Feb 16 21:44:58 crc 
kubenswrapper[4811]: I0216 21:44:58.919705 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-ql7fv_8724640a-57a7-402e-9bf8-a40105f068a0/manager/0.log" Feb 16 21:44:59 crc kubenswrapper[4811]: I0216 21:44:59.159871 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-hnr8w_f617ca23-fad3-4ff8-9c11-8a0c34458bb0/manager/0.log" Feb 16 21:44:59 crc kubenswrapper[4811]: I0216 21:44:59.349529 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-dr4l8_c8e2fb2f-471b-4bf5-a57f-1a175da3c9fe/manager/0.log" Feb 16 21:44:59 crc kubenswrapper[4811]: I0216 21:44:59.451522 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-hkcjr_eff6a2d8-85c4-4d00-b10f-f6b8b9266b94/manager/0.log" Feb 16 21:44:59 crc kubenswrapper[4811]: I0216 21:44:59.682736 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cg4rzt_c41c59d7-6daa-4dac-b5f1-22c3886ff6f4/manager/0.log" Feb 16 21:45:00 crc kubenswrapper[4811]: I0216 21:45:00.034150 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7dd97cff99-2dj7p_83aacf19-18bb-47e5-a94f-2949859ac9a3/operator/0.log" Feb 16 21:45:00 crc kubenswrapper[4811]: I0216 21:45:00.142172 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521305-ccfg9"] Feb 16 21:45:00 crc kubenswrapper[4811]: E0216 21:45:00.142604 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bdddd22-868b-45c8-9355-cffde39e92d2" containerName="registry-server" Feb 16 21:45:00 crc kubenswrapper[4811]: I0216 21:45:00.142627 4811 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1bdddd22-868b-45c8-9355-cffde39e92d2" containerName="registry-server" Feb 16 21:45:00 crc kubenswrapper[4811]: E0216 21:45:00.142642 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8a35ec5-6524-43e8-9002-2dc3d874daf4" containerName="container-00" Feb 16 21:45:00 crc kubenswrapper[4811]: I0216 21:45:00.142648 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a35ec5-6524-43e8-9002-2dc3d874daf4" containerName="container-00" Feb 16 21:45:00 crc kubenswrapper[4811]: E0216 21:45:00.142685 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bdddd22-868b-45c8-9355-cffde39e92d2" containerName="extract-utilities" Feb 16 21:45:00 crc kubenswrapper[4811]: I0216 21:45:00.142695 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bdddd22-868b-45c8-9355-cffde39e92d2" containerName="extract-utilities" Feb 16 21:45:00 crc kubenswrapper[4811]: E0216 21:45:00.142705 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bdddd22-868b-45c8-9355-cffde39e92d2" containerName="extract-content" Feb 16 21:45:00 crc kubenswrapper[4811]: I0216 21:45:00.142710 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bdddd22-868b-45c8-9355-cffde39e92d2" containerName="extract-content" Feb 16 21:45:00 crc kubenswrapper[4811]: I0216 21:45:00.143117 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8a35ec5-6524-43e8-9002-2dc3d874daf4" containerName="container-00" Feb 16 21:45:00 crc kubenswrapper[4811]: I0216 21:45:00.143134 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bdddd22-868b-45c8-9355-cffde39e92d2" containerName="registry-server" Feb 16 21:45:00 crc kubenswrapper[4811]: I0216 21:45:00.144340 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-ccfg9" Feb 16 21:45:00 crc kubenswrapper[4811]: I0216 21:45:00.146781 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 21:45:00 crc kubenswrapper[4811]: I0216 21:45:00.151421 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 21:45:00 crc kubenswrapper[4811]: I0216 21:45:00.184357 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521305-ccfg9"] Feb 16 21:45:00 crc kubenswrapper[4811]: I0216 21:45:00.231075 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4629x\" (UniqueName: \"kubernetes.io/projected/5c8dc727-c025-4c61-99e7-f27ca6599a02-kube-api-access-4629x\") pod \"collect-profiles-29521305-ccfg9\" (UID: \"5c8dc727-c025-4c61-99e7-f27ca6599a02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-ccfg9" Feb 16 21:45:00 crc kubenswrapper[4811]: I0216 21:45:00.231190 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c8dc727-c025-4c61-99e7-f27ca6599a02-secret-volume\") pod \"collect-profiles-29521305-ccfg9\" (UID: \"5c8dc727-c025-4c61-99e7-f27ca6599a02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-ccfg9" Feb 16 21:45:00 crc kubenswrapper[4811]: I0216 21:45:00.231273 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c8dc727-c025-4c61-99e7-f27ca6599a02-config-volume\") pod \"collect-profiles-29521305-ccfg9\" (UID: \"5c8dc727-c025-4c61-99e7-f27ca6599a02\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-ccfg9" Feb 16 21:45:00 crc kubenswrapper[4811]: I0216 21:45:00.332763 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c8dc727-c025-4c61-99e7-f27ca6599a02-config-volume\") pod \"collect-profiles-29521305-ccfg9\" (UID: \"5c8dc727-c025-4c61-99e7-f27ca6599a02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-ccfg9" Feb 16 21:45:00 crc kubenswrapper[4811]: I0216 21:45:00.332884 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4629x\" (UniqueName: \"kubernetes.io/projected/5c8dc727-c025-4c61-99e7-f27ca6599a02-kube-api-access-4629x\") pod \"collect-profiles-29521305-ccfg9\" (UID: \"5c8dc727-c025-4c61-99e7-f27ca6599a02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-ccfg9" Feb 16 21:45:00 crc kubenswrapper[4811]: I0216 21:45:00.333020 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c8dc727-c025-4c61-99e7-f27ca6599a02-secret-volume\") pod \"collect-profiles-29521305-ccfg9\" (UID: \"5c8dc727-c025-4c61-99e7-f27ca6599a02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-ccfg9" Feb 16 21:45:00 crc kubenswrapper[4811]: I0216 21:45:00.333676 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c8dc727-c025-4c61-99e7-f27ca6599a02-config-volume\") pod \"collect-profiles-29521305-ccfg9\" (UID: \"5c8dc727-c025-4c61-99e7-f27ca6599a02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-ccfg9" Feb 16 21:45:00 crc kubenswrapper[4811]: I0216 21:45:00.340736 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/5c8dc727-c025-4c61-99e7-f27ca6599a02-secret-volume\") pod \"collect-profiles-29521305-ccfg9\" (UID: \"5c8dc727-c025-4c61-99e7-f27ca6599a02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-ccfg9" Feb 16 21:45:00 crc kubenswrapper[4811]: I0216 21:45:00.349815 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4629x\" (UniqueName: \"kubernetes.io/projected/5c8dc727-c025-4c61-99e7-f27ca6599a02-kube-api-access-4629x\") pod \"collect-profiles-29521305-ccfg9\" (UID: \"5c8dc727-c025-4c61-99e7-f27ca6599a02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-ccfg9" Feb 16 21:45:00 crc kubenswrapper[4811]: I0216 21:45:00.478120 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-ccfg9" Feb 16 21:45:00 crc kubenswrapper[4811]: I0216 21:45:00.518014 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-4g8pq_727ccbcc-fd6b-4e49-9905-1e158605c309/registry-server/0.log" Feb 16 21:45:00 crc kubenswrapper[4811]: I0216 21:45:00.702702 4811 scope.go:117] "RemoveContainer" containerID="0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" Feb 16 21:45:00 crc kubenswrapper[4811]: E0216 21:45:00.702957 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:45:00 crc kubenswrapper[4811]: I0216 21:45:00.862583 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-q2fkl_81610b83-5cb3-41d5-81c6-a25ed9a86e25/manager/0.log" Feb 16 21:45:00 crc kubenswrapper[4811]: I0216 21:45:00.971307 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521305-ccfg9"] Feb 16 21:45:01 crc kubenswrapper[4811]: I0216 21:45:01.174482 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-sx755_88fe4f2b-4703-4c14-bc5d-c5abfec17e62/manager/0.log" Feb 16 21:45:01 crc kubenswrapper[4811]: I0216 21:45:01.329316 4811 generic.go:334] "Generic (PLEG): container finished" podID="5c8dc727-c025-4c61-99e7-f27ca6599a02" containerID="6324e0d7655880192af1ac8907fa1c0005739bd1738d48e8ec9f1252086176b4" exitCode=0 Feb 16 21:45:01 crc kubenswrapper[4811]: I0216 21:45:01.330068 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-ccfg9" event={"ID":"5c8dc727-c025-4c61-99e7-f27ca6599a02","Type":"ContainerDied","Data":"6324e0d7655880192af1ac8907fa1c0005739bd1738d48e8ec9f1252086176b4"} Feb 16 21:45:01 crc kubenswrapper[4811]: I0216 21:45:01.331711 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-ccfg9" event={"ID":"5c8dc727-c025-4c61-99e7-f27ca6599a02","Type":"ContainerStarted","Data":"33370ccb5ab945fedb370cc13d8d1b3510406387c74ee937526e373f953473eb"} Feb 16 21:45:01 crc kubenswrapper[4811]: I0216 21:45:01.358934 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-46224_0c7bb0d1-f8b1-4e01-8001-354628802f27/operator/0.log" Feb 16 21:45:01 crc kubenswrapper[4811]: I0216 21:45:01.396688 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-5rm5q_aab10670-0381-43b4-b9a6-e6c1c86fb4a7/manager/0.log" Feb 16 21:45:01 crc kubenswrapper[4811]: I0216 21:45:01.648074 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-fbz66_017b9ae7-6bf5-4781-a73e-293edb18f921/manager/0.log" Feb 16 21:45:01 crc kubenswrapper[4811]: I0216 21:45:01.659614 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-86b9cf86d-v9slw_e563cc5f-d10d-4fe9-8ce0-c0774dfc21b2/manager/0.log" Feb 16 21:45:01 crc kubenswrapper[4811]: I0216 21:45:01.824821 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-2n6tm_e2f72d19-6fc4-4fb7-8ebc-b089dc0e8231/manager/0.log" Feb 16 21:45:02 crc kubenswrapper[4811]: I0216 21:45:02.017698 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-s84xq_7f0ce1fe-b0a7-4637-927c-350d6a383cab/manager/0.log" Feb 16 21:45:02 crc kubenswrapper[4811]: I0216 21:45:02.210809 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7d4dd64c87-cqrfg_21df9513-6f5c-45d7-b7d7-4a901037433a/manager/0.log" Feb 16 21:45:02 crc kubenswrapper[4811]: I0216 21:45:02.781497 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-ccfg9" Feb 16 21:45:02 crc kubenswrapper[4811]: I0216 21:45:02.885570 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c8dc727-c025-4c61-99e7-f27ca6599a02-secret-volume\") pod \"5c8dc727-c025-4c61-99e7-f27ca6599a02\" (UID: \"5c8dc727-c025-4c61-99e7-f27ca6599a02\") " Feb 16 21:45:02 crc kubenswrapper[4811]: I0216 21:45:02.885822 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4629x\" (UniqueName: \"kubernetes.io/projected/5c8dc727-c025-4c61-99e7-f27ca6599a02-kube-api-access-4629x\") pod \"5c8dc727-c025-4c61-99e7-f27ca6599a02\" (UID: \"5c8dc727-c025-4c61-99e7-f27ca6599a02\") " Feb 16 21:45:02 crc kubenswrapper[4811]: I0216 21:45:02.885995 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c8dc727-c025-4c61-99e7-f27ca6599a02-config-volume\") pod \"5c8dc727-c025-4c61-99e7-f27ca6599a02\" (UID: \"5c8dc727-c025-4c61-99e7-f27ca6599a02\") " Feb 16 21:45:02 crc kubenswrapper[4811]: I0216 21:45:02.887045 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c8dc727-c025-4c61-99e7-f27ca6599a02-config-volume" (OuterVolumeSpecName: "config-volume") pod "5c8dc727-c025-4c61-99e7-f27ca6599a02" (UID: "5c8dc727-c025-4c61-99e7-f27ca6599a02"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:45:02 crc kubenswrapper[4811]: I0216 21:45:02.893233 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c8dc727-c025-4c61-99e7-f27ca6599a02-kube-api-access-4629x" (OuterVolumeSpecName: "kube-api-access-4629x") pod "5c8dc727-c025-4c61-99e7-f27ca6599a02" (UID: "5c8dc727-c025-4c61-99e7-f27ca6599a02"). 
InnerVolumeSpecName "kube-api-access-4629x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:45:02 crc kubenswrapper[4811]: I0216 21:45:02.892985 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c8dc727-c025-4c61-99e7-f27ca6599a02-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5c8dc727-c025-4c61-99e7-f27ca6599a02" (UID: "5c8dc727-c025-4c61-99e7-f27ca6599a02"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:45:02 crc kubenswrapper[4811]: I0216 21:45:02.991887 4811 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c8dc727-c025-4c61-99e7-f27ca6599a02-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 21:45:02 crc kubenswrapper[4811]: I0216 21:45:02.991926 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4629x\" (UniqueName: \"kubernetes.io/projected/5c8dc727-c025-4c61-99e7-f27ca6599a02-kube-api-access-4629x\") on node \"crc\" DevicePath \"\"" Feb 16 21:45:02 crc kubenswrapper[4811]: I0216 21:45:02.991939 4811 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c8dc727-c025-4c61-99e7-f27ca6599a02-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 21:45:03 crc kubenswrapper[4811]: I0216 21:45:03.349436 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-ccfg9" event={"ID":"5c8dc727-c025-4c61-99e7-f27ca6599a02","Type":"ContainerDied","Data":"33370ccb5ab945fedb370cc13d8d1b3510406387c74ee937526e373f953473eb"} Feb 16 21:45:03 crc kubenswrapper[4811]: I0216 21:45:03.349487 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33370ccb5ab945fedb370cc13d8d1b3510406387c74ee937526e373f953473eb" Feb 16 21:45:03 crc kubenswrapper[4811]: I0216 21:45:03.349564 4811 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-ccfg9" Feb 16 21:45:03 crc kubenswrapper[4811]: I0216 21:45:03.859527 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521260-8xv4l"] Feb 16 21:45:03 crc kubenswrapper[4811]: I0216 21:45:03.872743 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521260-8xv4l"] Feb 16 21:45:03 crc kubenswrapper[4811]: I0216 21:45:03.888371 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-4m2sl_a930b399-b523-4186-8bf8-c9f071a52b0d/manager/0.log" Feb 16 21:45:04 crc kubenswrapper[4811]: E0216 21:45:04.705810 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:45:04 crc kubenswrapper[4811]: I0216 21:45:04.717859 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b12cc0f-d02f-4db6-8937-190156d483ff" path="/var/lib/kubelet/pods/4b12cc0f-d02f-4db6-8937-190156d483ff/volumes" Feb 16 21:45:15 crc kubenswrapper[4811]: I0216 21:45:15.702757 4811 scope.go:117] "RemoveContainer" containerID="0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" Feb 16 21:45:15 crc kubenswrapper[4811]: E0216 21:45:15.703615 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:45:18 crc kubenswrapper[4811]: E0216 21:45:18.705919 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:45:21 crc kubenswrapper[4811]: I0216 21:45:21.435986 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qnxsg_b848efbd-79a2-4b6b-a42f-36f109a33e01/control-plane-machine-set-operator/0.log" Feb 16 21:45:21 crc kubenswrapper[4811]: I0216 21:45:21.583435 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gx777_90270354-a779-4378-8bca-c2ff51ecac2e/machine-api-operator/0.log" Feb 16 21:45:21 crc kubenswrapper[4811]: I0216 21:45:21.591057 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gx777_90270354-a779-4378-8bca-c2ff51ecac2e/kube-rbac-proxy/0.log" Feb 16 21:45:28 crc kubenswrapper[4811]: I0216 21:45:28.703345 4811 scope.go:117] "RemoveContainer" containerID="0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" Feb 16 21:45:28 crc kubenswrapper[4811]: E0216 21:45:28.704161 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:45:31 crc 
kubenswrapper[4811]: E0216 21:45:31.707412 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:45:34 crc kubenswrapper[4811]: I0216 21:45:34.325952 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-vb859_eae3ad4d-9c2f-42b0-aba5-349aee77959c/cert-manager-controller/0.log" Feb 16 21:45:34 crc kubenswrapper[4811]: I0216 21:45:34.518869 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-cdx5v_3a9cec30-249b-4b05-a7d3-1722bf778309/cert-manager-cainjector/0.log" Feb 16 21:45:34 crc kubenswrapper[4811]: I0216 21:45:34.617567 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-gvrsm_5f8737f9-432a-4461-b9ae-990b294ad123/cert-manager-webhook/0.log" Feb 16 21:45:41 crc kubenswrapper[4811]: I0216 21:45:41.702757 4811 scope.go:117] "RemoveContainer" containerID="0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" Feb 16 21:45:41 crc kubenswrapper[4811]: E0216 21:45:41.703479 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:45:45 crc kubenswrapper[4811]: E0216 21:45:45.705478 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:45:47 crc kubenswrapper[4811]: I0216 21:45:47.894877 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-n8wkn_57c3d8d6-3964-4cdd-ad7b-270a01966704/nmstate-console-plugin/0.log" Feb 16 21:45:48 crc kubenswrapper[4811]: I0216 21:45:48.075263 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-v68rf_11976713-8674-4ea3-829a-b5ce035052bb/nmstate-handler/0.log" Feb 16 21:45:48 crc kubenswrapper[4811]: I0216 21:45:48.142307 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-9jlhr_4f24dbb8-fb2e-4076-a050-2fbcdbbceefd/kube-rbac-proxy/0.log" Feb 16 21:45:48 crc kubenswrapper[4811]: I0216 21:45:48.292280 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-9jlhr_4f24dbb8-fb2e-4076-a050-2fbcdbbceefd/nmstate-metrics/0.log" Feb 16 21:45:48 crc kubenswrapper[4811]: I0216 21:45:48.325819 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-jdk8r_97cb0194-bca8-4074-bf79-c7827cdd12a4/nmstate-operator/0.log" Feb 16 21:45:48 crc kubenswrapper[4811]: I0216 21:45:48.486161 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-dfn87_6655fbcb-d36f-45c8-a8b9-233070bddb6e/nmstate-webhook/0.log" Feb 16 21:45:54 crc kubenswrapper[4811]: I0216 21:45:54.166684 4811 scope.go:117] "RemoveContainer" containerID="6e45156d038066e6f2d7bde1687a7eb781aaedc086dcb461f6b73c515df55a28" Feb 16 21:45:56 crc kubenswrapper[4811]: I0216 21:45:56.703860 4811 scope.go:117] "RemoveContainer" containerID="0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" Feb 16 21:45:56 
crc kubenswrapper[4811]: E0216 21:45:56.704941 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:45:57 crc kubenswrapper[4811]: E0216 21:45:57.706385 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:46:02 crc kubenswrapper[4811]: I0216 21:46:02.992166 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-65947bdd9b-jmw6n_d8eaf998-04df-433c-93e9-df5a9261330d/manager/0.log" Feb 16 21:46:02 crc kubenswrapper[4811]: I0216 21:46:02.997087 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-65947bdd9b-jmw6n_d8eaf998-04df-433c-93e9-df5a9261330d/kube-rbac-proxy/0.log" Feb 16 21:46:08 crc kubenswrapper[4811]: I0216 21:46:08.703699 4811 scope.go:117] "RemoveContainer" containerID="0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" Feb 16 21:46:08 crc kubenswrapper[4811]: E0216 21:46:08.704627 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:46:09 crc kubenswrapper[4811]: E0216 21:46:09.704948 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:46:17 crc kubenswrapper[4811]: I0216 21:46:17.887696 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-88jqj_dc3ef150-066b-4fd0-bff2-4606e25694e4/prometheus-operator/0.log" Feb 16 21:46:18 crc kubenswrapper[4811]: I0216 21:46:18.134006 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-849f874f67-kc8n4_42b00ab7-c05d-40bc-a605-10d2bc710ec5/prometheus-operator-admission-webhook/0.log" Feb 16 21:46:18 crc kubenswrapper[4811]: I0216 21:46:18.165670 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-849f874f67-zr8hs_dbbd192e-9df8-40ec-9397-f9eebf6b9111/prometheus-operator-admission-webhook/0.log" Feb 16 21:46:18 crc kubenswrapper[4811]: I0216 21:46:18.319313 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-4z4hh_bba265f5-85c6-4130-a470-839286f95d5b/operator/0.log" Feb 16 21:46:18 crc kubenswrapper[4811]: I0216 21:46:18.376591 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-8r4zv_fa4598ee-dd6b-40e2-a925-71d9e3e6c17a/perses-operator/0.log" Feb 16 21:46:22 crc kubenswrapper[4811]: I0216 21:46:22.741008 4811 scope.go:117] "RemoveContainer" 
containerID="0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" Feb 16 21:46:22 crc kubenswrapper[4811]: E0216 21:46:22.741707 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:46:23 crc kubenswrapper[4811]: E0216 21:46:23.705469 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:46:33 crc kubenswrapper[4811]: I0216 21:46:33.664169 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-pnpzd_fa405c88-9fb5-47e9-b4e6-70813ede9574/kube-rbac-proxy/0.log" Feb 16 21:46:33 crc kubenswrapper[4811]: I0216 21:46:33.797654 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-pnpzd_fa405c88-9fb5-47e9-b4e6-70813ede9574/controller/0.log" Feb 16 21:46:33 crc kubenswrapper[4811]: I0216 21:46:33.847236 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwqbc_dc88841d-ab30-446e-a4f1-f7e37902c90d/cp-frr-files/0.log" Feb 16 21:46:34 crc kubenswrapper[4811]: I0216 21:46:34.080906 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwqbc_dc88841d-ab30-446e-a4f1-f7e37902c90d/cp-reloader/0.log" Feb 16 21:46:34 crc kubenswrapper[4811]: I0216 21:46:34.088709 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-rwqbc_dc88841d-ab30-446e-a4f1-f7e37902c90d/cp-frr-files/0.log" Feb 16 21:46:34 crc kubenswrapper[4811]: I0216 21:46:34.091179 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwqbc_dc88841d-ab30-446e-a4f1-f7e37902c90d/cp-metrics/0.log" Feb 16 21:46:34 crc kubenswrapper[4811]: I0216 21:46:34.092526 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwqbc_dc88841d-ab30-446e-a4f1-f7e37902c90d/cp-reloader/0.log" Feb 16 21:46:34 crc kubenswrapper[4811]: I0216 21:46:34.244958 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwqbc_dc88841d-ab30-446e-a4f1-f7e37902c90d/cp-reloader/0.log" Feb 16 21:46:34 crc kubenswrapper[4811]: I0216 21:46:34.255456 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwqbc_dc88841d-ab30-446e-a4f1-f7e37902c90d/cp-frr-files/0.log" Feb 16 21:46:34 crc kubenswrapper[4811]: I0216 21:46:34.304994 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwqbc_dc88841d-ab30-446e-a4f1-f7e37902c90d/cp-metrics/0.log" Feb 16 21:46:34 crc kubenswrapper[4811]: I0216 21:46:34.360849 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwqbc_dc88841d-ab30-446e-a4f1-f7e37902c90d/cp-metrics/0.log" Feb 16 21:46:34 crc kubenswrapper[4811]: I0216 21:46:34.502910 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwqbc_dc88841d-ab30-446e-a4f1-f7e37902c90d/cp-frr-files/0.log" Feb 16 21:46:34 crc kubenswrapper[4811]: I0216 21:46:34.511551 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwqbc_dc88841d-ab30-446e-a4f1-f7e37902c90d/cp-reloader/0.log" Feb 16 21:46:34 crc kubenswrapper[4811]: I0216 21:46:34.561369 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-rwqbc_dc88841d-ab30-446e-a4f1-f7e37902c90d/cp-metrics/0.log" Feb 16 21:46:34 crc kubenswrapper[4811]: I0216 21:46:34.608419 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwqbc_dc88841d-ab30-446e-a4f1-f7e37902c90d/controller/0.log" Feb 16 21:46:34 crc kubenswrapper[4811]: I0216 21:46:34.761158 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwqbc_dc88841d-ab30-446e-a4f1-f7e37902c90d/frr-metrics/0.log" Feb 16 21:46:34 crc kubenswrapper[4811]: I0216 21:46:34.797318 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwqbc_dc88841d-ab30-446e-a4f1-f7e37902c90d/kube-rbac-proxy/0.log" Feb 16 21:46:34 crc kubenswrapper[4811]: I0216 21:46:34.805803 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwqbc_dc88841d-ab30-446e-a4f1-f7e37902c90d/kube-rbac-proxy-frr/0.log" Feb 16 21:46:34 crc kubenswrapper[4811]: I0216 21:46:34.991535 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwqbc_dc88841d-ab30-446e-a4f1-f7e37902c90d/reloader/0.log" Feb 16 21:46:35 crc kubenswrapper[4811]: I0216 21:46:35.011394 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-5ndl9_b0926b4d-4fed-4543-abd8-1e1cc65983f6/frr-k8s-webhook-server/0.log" Feb 16 21:46:35 crc kubenswrapper[4811]: I0216 21:46:35.505170 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-84454db595-l2tp8_4d8c2078-639b-41a8-9ac9-58e8b6315d05/manager/0.log" Feb 16 21:46:35 crc kubenswrapper[4811]: I0216 21:46:35.713354 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qvj4b_dfcb7d78-0504-47c4-a5bc-05f382feefaa/kube-rbac-proxy/0.log" Feb 16 21:46:35 crc kubenswrapper[4811]: I0216 21:46:35.762705 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7899c768f-d4x8l_e42eebc8-e49d-4af3-9ab6-c2c2ca258e81/webhook-server/0.log" Feb 16 21:46:35 crc kubenswrapper[4811]: I0216 21:46:35.776267 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwqbc_dc88841d-ab30-446e-a4f1-f7e37902c90d/frr/0.log" Feb 16 21:46:36 crc kubenswrapper[4811]: I0216 21:46:36.231215 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qvj4b_dfcb7d78-0504-47c4-a5bc-05f382feefaa/speaker/0.log" Feb 16 21:46:36 crc kubenswrapper[4811]: I0216 21:46:36.707540 4811 scope.go:117] "RemoveContainer" containerID="0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" Feb 16 21:46:36 crc kubenswrapper[4811]: E0216 21:46:36.708015 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:46:36 crc kubenswrapper[4811]: E0216 21:46:36.708516 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:46:49 crc kubenswrapper[4811]: E0216 21:46:49.704872 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" 
podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:46:50 crc kubenswrapper[4811]: I0216 21:46:50.301695 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5_d2a06619-ff67-4a17-b2fa-b3e9f6f45345/util/0.log" Feb 16 21:46:50 crc kubenswrapper[4811]: I0216 21:46:50.553745 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5_d2a06619-ff67-4a17-b2fa-b3e9f6f45345/util/0.log" Feb 16 21:46:50 crc kubenswrapper[4811]: I0216 21:46:50.581352 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5_d2a06619-ff67-4a17-b2fa-b3e9f6f45345/pull/0.log" Feb 16 21:46:50 crc kubenswrapper[4811]: I0216 21:46:50.601030 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5_d2a06619-ff67-4a17-b2fa-b3e9f6f45345/pull/0.log" Feb 16 21:46:50 crc kubenswrapper[4811]: I0216 21:46:50.762770 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5_d2a06619-ff67-4a17-b2fa-b3e9f6f45345/util/0.log" Feb 16 21:46:50 crc kubenswrapper[4811]: I0216 21:46:50.818182 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5_d2a06619-ff67-4a17-b2fa-b3e9f6f45345/extract/0.log" Feb 16 21:46:50 crc kubenswrapper[4811]: I0216 21:46:50.856131 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651q8kv5_d2a06619-ff67-4a17-b2fa-b3e9f6f45345/pull/0.log" Feb 16 21:46:50 crc kubenswrapper[4811]: I0216 21:46:50.979274 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp_578040b1-e6b6-4064-a8fc-ee5635df7eee/util/0.log" Feb 16 21:46:51 crc kubenswrapper[4811]: I0216 21:46:51.165083 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp_578040b1-e6b6-4064-a8fc-ee5635df7eee/pull/0.log" Feb 16 21:46:51 crc kubenswrapper[4811]: I0216 21:46:51.202778 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp_578040b1-e6b6-4064-a8fc-ee5635df7eee/util/0.log" Feb 16 21:46:51 crc kubenswrapper[4811]: I0216 21:46:51.231715 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp_578040b1-e6b6-4064-a8fc-ee5635df7eee/pull/0.log" Feb 16 21:46:51 crc kubenswrapper[4811]: I0216 21:46:51.397805 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp_578040b1-e6b6-4064-a8fc-ee5635df7eee/pull/0.log" Feb 16 21:46:51 crc kubenswrapper[4811]: I0216 21:46:51.407793 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp_578040b1-e6b6-4064-a8fc-ee5635df7eee/util/0.log" Feb 16 21:46:51 crc kubenswrapper[4811]: I0216 21:46:51.412961 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084hxkp_578040b1-e6b6-4064-a8fc-ee5635df7eee/extract/0.log" Feb 16 21:46:51 crc kubenswrapper[4811]: I0216 21:46:51.618297 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5_e179a5d8-431a-42cf-b2cc-848631cb784a/util/0.log" Feb 16 
21:46:51 crc kubenswrapper[4811]: I0216 21:46:51.703035 4811 scope.go:117] "RemoveContainer" containerID="0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" Feb 16 21:46:51 crc kubenswrapper[4811]: E0216 21:46:51.703562 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:46:51 crc kubenswrapper[4811]: I0216 21:46:51.843942 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5_e179a5d8-431a-42cf-b2cc-848631cb784a/pull/0.log" Feb 16 21:46:51 crc kubenswrapper[4811]: I0216 21:46:51.865626 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5_e179a5d8-431a-42cf-b2cc-848631cb784a/pull/0.log" Feb 16 21:46:51 crc kubenswrapper[4811]: I0216 21:46:51.984207 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5_e179a5d8-431a-42cf-b2cc-848631cb784a/util/0.log" Feb 16 21:46:52 crc kubenswrapper[4811]: I0216 21:46:52.151983 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5_e179a5d8-431a-42cf-b2cc-848631cb784a/pull/0.log" Feb 16 21:46:52 crc kubenswrapper[4811]: I0216 21:46:52.160970 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5_e179a5d8-431a-42cf-b2cc-848631cb784a/extract/0.log" Feb 16 21:46:52 crc 
kubenswrapper[4811]: I0216 21:46:52.179967 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213scbg5_e179a5d8-431a-42cf-b2cc-848631cb784a/util/0.log" Feb 16 21:46:52 crc kubenswrapper[4811]: I0216 21:46:52.350465 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-92bf7_6fb34ae7-4d56-44b0-9db6-c890b1d57fdf/extract-utilities/0.log" Feb 16 21:46:52 crc kubenswrapper[4811]: I0216 21:46:52.516638 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-92bf7_6fb34ae7-4d56-44b0-9db6-c890b1d57fdf/extract-utilities/0.log" Feb 16 21:46:52 crc kubenswrapper[4811]: I0216 21:46:52.525648 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-92bf7_6fb34ae7-4d56-44b0-9db6-c890b1d57fdf/extract-content/0.log" Feb 16 21:46:52 crc kubenswrapper[4811]: I0216 21:46:52.553067 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-92bf7_6fb34ae7-4d56-44b0-9db6-c890b1d57fdf/extract-content/0.log" Feb 16 21:46:52 crc kubenswrapper[4811]: I0216 21:46:52.759741 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-92bf7_6fb34ae7-4d56-44b0-9db6-c890b1d57fdf/extract-utilities/0.log" Feb 16 21:46:52 crc kubenswrapper[4811]: I0216 21:46:52.820792 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-92bf7_6fb34ae7-4d56-44b0-9db6-c890b1d57fdf/extract-content/0.log" Feb 16 21:46:53 crc kubenswrapper[4811]: I0216 21:46:53.058488 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rrgb4_6c21d535-a947-4399-ac26-4d5bcd1ef31f/extract-utilities/0.log" Feb 16 21:46:53 crc kubenswrapper[4811]: I0216 21:46:53.100898 4811 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_certified-operators-92bf7_6fb34ae7-4d56-44b0-9db6-c890b1d57fdf/registry-server/0.log" Feb 16 21:46:53 crc kubenswrapper[4811]: I0216 21:46:53.203468 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rrgb4_6c21d535-a947-4399-ac26-4d5bcd1ef31f/extract-content/0.log" Feb 16 21:46:53 crc kubenswrapper[4811]: I0216 21:46:53.203695 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rrgb4_6c21d535-a947-4399-ac26-4d5bcd1ef31f/extract-utilities/0.log" Feb 16 21:46:53 crc kubenswrapper[4811]: I0216 21:46:53.240756 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rrgb4_6c21d535-a947-4399-ac26-4d5bcd1ef31f/extract-content/0.log" Feb 16 21:46:53 crc kubenswrapper[4811]: I0216 21:46:53.455026 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rrgb4_6c21d535-a947-4399-ac26-4d5bcd1ef31f/extract-content/0.log" Feb 16 21:46:53 crc kubenswrapper[4811]: I0216 21:46:53.463997 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rrgb4_6c21d535-a947-4399-ac26-4d5bcd1ef31f/extract-utilities/0.log" Feb 16 21:46:53 crc kubenswrapper[4811]: I0216 21:46:53.694718 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85_fbfe090c-7d12-4a08-ab12-8ee916f0741f/util/0.log" Feb 16 21:46:53 crc kubenswrapper[4811]: I0216 21:46:53.883959 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85_fbfe090c-7d12-4a08-ab12-8ee916f0741f/util/0.log" Feb 16 21:46:53 crc kubenswrapper[4811]: I0216 21:46:53.919259 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85_fbfe090c-7d12-4a08-ab12-8ee916f0741f/pull/0.log" Feb 16 21:46:53 crc kubenswrapper[4811]: I0216 21:46:53.950392 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rrgb4_6c21d535-a947-4399-ac26-4d5bcd1ef31f/registry-server/0.log" Feb 16 21:46:53 crc kubenswrapper[4811]: I0216 21:46:53.961568 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85_fbfe090c-7d12-4a08-ab12-8ee916f0741f/pull/0.log" Feb 16 21:46:54 crc kubenswrapper[4811]: I0216 21:46:54.176843 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85_fbfe090c-7d12-4a08-ab12-8ee916f0741f/pull/0.log" Feb 16 21:46:54 crc kubenswrapper[4811]: I0216 21:46:54.187747 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85_fbfe090c-7d12-4a08-ab12-8ee916f0741f/extract/0.log" Feb 16 21:46:54 crc kubenswrapper[4811]: I0216 21:46:54.202557 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5s85_fbfe090c-7d12-4a08-ab12-8ee916f0741f/util/0.log" Feb 16 21:46:54 crc kubenswrapper[4811]: I0216 21:46:54.355936 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-gwhxj_6dbdae02-959b-48e1-9297-c76789cdb528/marketplace-operator/0.log" Feb 16 21:46:54 crc kubenswrapper[4811]: I0216 21:46:54.401863 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rfd2c_a7bd7115-3a3e-4312-8543-2f40686cfdb0/extract-utilities/0.log" Feb 16 21:46:54 crc kubenswrapper[4811]: I0216 21:46:54.556129 4811 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rfd2c_a7bd7115-3a3e-4312-8543-2f40686cfdb0/extract-utilities/0.log" Feb 16 21:46:54 crc kubenswrapper[4811]: I0216 21:46:54.576147 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rfd2c_a7bd7115-3a3e-4312-8543-2f40686cfdb0/extract-content/0.log" Feb 16 21:46:54 crc kubenswrapper[4811]: I0216 21:46:54.598132 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rfd2c_a7bd7115-3a3e-4312-8543-2f40686cfdb0/extract-content/0.log" Feb 16 21:46:54 crc kubenswrapper[4811]: I0216 21:46:54.758526 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rfd2c_a7bd7115-3a3e-4312-8543-2f40686cfdb0/extract-utilities/0.log" Feb 16 21:46:54 crc kubenswrapper[4811]: I0216 21:46:54.801158 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rfd2c_a7bd7115-3a3e-4312-8543-2f40686cfdb0/extract-content/0.log" Feb 16 21:46:54 crc kubenswrapper[4811]: I0216 21:46:54.830555 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fwbbq_a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5/extract-utilities/0.log" Feb 16 21:46:54 crc kubenswrapper[4811]: I0216 21:46:54.960181 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rfd2c_a7bd7115-3a3e-4312-8543-2f40686cfdb0/registry-server/0.log" Feb 16 21:46:54 crc kubenswrapper[4811]: I0216 21:46:54.980402 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fwbbq_a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5/extract-utilities/0.log" Feb 16 21:46:55 crc kubenswrapper[4811]: I0216 21:46:55.005255 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-fwbbq_a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5/extract-content/0.log" Feb 16 21:46:55 crc kubenswrapper[4811]: I0216 21:46:55.031246 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fwbbq_a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5/extract-content/0.log" Feb 16 21:46:55 crc kubenswrapper[4811]: I0216 21:46:55.209874 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fwbbq_a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5/extract-content/0.log" Feb 16 21:46:55 crc kubenswrapper[4811]: I0216 21:46:55.229540 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fwbbq_a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5/extract-utilities/0.log" Feb 16 21:46:55 crc kubenswrapper[4811]: I0216 21:46:55.733965 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fwbbq_a7acaaf0-b18d-4e9b-b9fa-d1c384c879a5/registry-server/0.log" Feb 16 21:47:03 crc kubenswrapper[4811]: I0216 21:47:03.703519 4811 scope.go:117] "RemoveContainer" containerID="0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" Feb 16 21:47:03 crc kubenswrapper[4811]: E0216 21:47:03.704764 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:47:03 crc kubenswrapper[4811]: E0216 21:47:03.706163 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:47:06 crc kubenswrapper[4811]: I0216 21:47:06.256736 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cbm26"] Feb 16 21:47:06 crc kubenswrapper[4811]: E0216 21:47:06.257708 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c8dc727-c025-4c61-99e7-f27ca6599a02" containerName="collect-profiles" Feb 16 21:47:06 crc kubenswrapper[4811]: I0216 21:47:06.257723 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8dc727-c025-4c61-99e7-f27ca6599a02" containerName="collect-profiles" Feb 16 21:47:06 crc kubenswrapper[4811]: I0216 21:47:06.257976 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c8dc727-c025-4c61-99e7-f27ca6599a02" containerName="collect-profiles" Feb 16 21:47:06 crc kubenswrapper[4811]: I0216 21:47:06.259826 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cbm26" Feb 16 21:47:06 crc kubenswrapper[4811]: I0216 21:47:06.281068 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cbm26"] Feb 16 21:47:06 crc kubenswrapper[4811]: I0216 21:47:06.360748 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8-utilities\") pod \"community-operators-cbm26\" (UID: \"0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8\") " pod="openshift-marketplace/community-operators-cbm26" Feb 16 21:47:06 crc kubenswrapper[4811]: I0216 21:47:06.362119 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wk8x\" (UniqueName: \"kubernetes.io/projected/0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8-kube-api-access-6wk8x\") pod \"community-operators-cbm26\" (UID: \"0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8\") " pod="openshift-marketplace/community-operators-cbm26" Feb 16 21:47:06 crc kubenswrapper[4811]: I0216 21:47:06.362386 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8-catalog-content\") pod \"community-operators-cbm26\" (UID: \"0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8\") " pod="openshift-marketplace/community-operators-cbm26" Feb 16 21:47:06 crc kubenswrapper[4811]: I0216 21:47:06.466673 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8-catalog-content\") pod \"community-operators-cbm26\" (UID: \"0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8\") " pod="openshift-marketplace/community-operators-cbm26" Feb 16 21:47:06 crc kubenswrapper[4811]: I0216 21:47:06.466737 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8-utilities\") pod \"community-operators-cbm26\" (UID: \"0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8\") " pod="openshift-marketplace/community-operators-cbm26" Feb 16 21:47:06 crc kubenswrapper[4811]: I0216 21:47:06.466804 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wk8x\" (UniqueName: \"kubernetes.io/projected/0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8-kube-api-access-6wk8x\") pod \"community-operators-cbm26\" (UID: \"0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8\") " pod="openshift-marketplace/community-operators-cbm26" Feb 16 21:47:06 crc kubenswrapper[4811]: I0216 21:47:06.467235 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8-catalog-content\") pod \"community-operators-cbm26\" (UID: \"0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8\") " pod="openshift-marketplace/community-operators-cbm26" Feb 16 21:47:06 crc kubenswrapper[4811]: I0216 21:47:06.467460 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8-utilities\") pod \"community-operators-cbm26\" (UID: \"0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8\") " pod="openshift-marketplace/community-operators-cbm26" Feb 16 21:47:06 crc kubenswrapper[4811]: I0216 21:47:06.512827 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wk8x\" (UniqueName: \"kubernetes.io/projected/0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8-kube-api-access-6wk8x\") pod \"community-operators-cbm26\" (UID: \"0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8\") " pod="openshift-marketplace/community-operators-cbm26" Feb 16 21:47:06 crc kubenswrapper[4811]: I0216 21:47:06.586826 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cbm26" Feb 16 21:47:07 crc kubenswrapper[4811]: I0216 21:47:07.108843 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cbm26"] Feb 16 21:47:07 crc kubenswrapper[4811]: W0216 21:47:07.113974 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0702afbc_5ac5_4d9f_8b3b_1a55fff89cf8.slice/crio-dc0c64a95ba4b48970f71cde37d7226bd610e1fbf7cc5a7c68f97b72f78e1925 WatchSource:0}: Error finding container dc0c64a95ba4b48970f71cde37d7226bd610e1fbf7cc5a7c68f97b72f78e1925: Status 404 returned error can't find the container with id dc0c64a95ba4b48970f71cde37d7226bd610e1fbf7cc5a7c68f97b72f78e1925 Feb 16 21:47:07 crc kubenswrapper[4811]: I0216 21:47:07.536912 4811 generic.go:334] "Generic (PLEG): container finished" podID="0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8" containerID="644ad4809c4012deeb075c302bfd1fd883761eacc20aab0142270a84666f5ce5" exitCode=0 Feb 16 21:47:07 crc kubenswrapper[4811]: I0216 21:47:07.536960 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbm26" event={"ID":"0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8","Type":"ContainerDied","Data":"644ad4809c4012deeb075c302bfd1fd883761eacc20aab0142270a84666f5ce5"} Feb 16 21:47:07 crc kubenswrapper[4811]: I0216 21:47:07.537156 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbm26" event={"ID":"0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8","Type":"ContainerStarted","Data":"dc0c64a95ba4b48970f71cde37d7226bd610e1fbf7cc5a7c68f97b72f78e1925"} Feb 16 21:47:09 crc kubenswrapper[4811]: I0216 21:47:09.567547 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbm26" 
event={"ID":"0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8","Type":"ContainerStarted","Data":"98fa42ff0a2efcf5fe743cec8728ad806bc94e95c6d210ca0d8675e6243b5269"} Feb 16 21:47:09 crc kubenswrapper[4811]: I0216 21:47:09.945445 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-849f874f67-kc8n4_42b00ab7-c05d-40bc-a605-10d2bc710ec5/prometheus-operator-admission-webhook/0.log" Feb 16 21:47:09 crc kubenswrapper[4811]: I0216 21:47:09.960722 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-849f874f67-zr8hs_dbbd192e-9df8-40ec-9397-f9eebf6b9111/prometheus-operator-admission-webhook/0.log" Feb 16 21:47:09 crc kubenswrapper[4811]: I0216 21:47:09.986093 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-88jqj_dc3ef150-066b-4fd0-bff2-4606e25694e4/prometheus-operator/0.log" Feb 16 21:47:10 crc kubenswrapper[4811]: I0216 21:47:10.141155 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-8r4zv_fa4598ee-dd6b-40e2-a925-71d9e3e6c17a/perses-operator/0.log" Feb 16 21:47:10 crc kubenswrapper[4811]: I0216 21:47:10.147000 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-4z4hh_bba265f5-85c6-4130-a470-839286f95d5b/operator/0.log" Feb 16 21:47:10 crc kubenswrapper[4811]: I0216 21:47:10.631147 4811 generic.go:334] "Generic (PLEG): container finished" podID="0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8" containerID="98fa42ff0a2efcf5fe743cec8728ad806bc94e95c6d210ca0d8675e6243b5269" exitCode=0 Feb 16 21:47:10 crc kubenswrapper[4811]: I0216 21:47:10.631245 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbm26" 
event={"ID":"0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8","Type":"ContainerDied","Data":"98fa42ff0a2efcf5fe743cec8728ad806bc94e95c6d210ca0d8675e6243b5269"} Feb 16 21:47:11 crc kubenswrapper[4811]: I0216 21:47:11.650820 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbm26" event={"ID":"0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8","Type":"ContainerStarted","Data":"4d3d6baff6ad08b2edf0fb682869e1082a1cfc12963d57bece70a5992edb1d09"} Feb 16 21:47:11 crc kubenswrapper[4811]: I0216 21:47:11.669508 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cbm26" podStartSLOduration=2.158960138 podStartE2EDuration="5.669486736s" podCreationTimestamp="2026-02-16 21:47:06 +0000 UTC" firstStartedPulling="2026-02-16 21:47:07.539329983 +0000 UTC m=+3045.468625921" lastFinishedPulling="2026-02-16 21:47:11.049856581 +0000 UTC m=+3048.979152519" observedRunningTime="2026-02-16 21:47:11.665974589 +0000 UTC m=+3049.595270527" watchObservedRunningTime="2026-02-16 21:47:11.669486736 +0000 UTC m=+3049.598782694" Feb 16 21:47:14 crc kubenswrapper[4811]: E0216 21:47:14.706679 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:47:16 crc kubenswrapper[4811]: I0216 21:47:16.587285 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cbm26" Feb 16 21:47:16 crc kubenswrapper[4811]: I0216 21:47:16.587755 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cbm26" Feb 16 21:47:16 crc kubenswrapper[4811]: I0216 21:47:16.671126 4811 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/community-operators-cbm26" Feb 16 21:47:16 crc kubenswrapper[4811]: I0216 21:47:16.702919 4811 scope.go:117] "RemoveContainer" containerID="0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" Feb 16 21:47:16 crc kubenswrapper[4811]: E0216 21:47:16.703172 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:47:16 crc kubenswrapper[4811]: I0216 21:47:16.760171 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cbm26" Feb 16 21:47:16 crc kubenswrapper[4811]: I0216 21:47:16.907853 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cbm26"] Feb 16 21:47:18 crc kubenswrapper[4811]: I0216 21:47:18.726567 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cbm26" podUID="0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8" containerName="registry-server" containerID="cri-o://4d3d6baff6ad08b2edf0fb682869e1082a1cfc12963d57bece70a5992edb1d09" gracePeriod=2 Feb 16 21:47:19 crc kubenswrapper[4811]: I0216 21:47:19.225498 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cbm26" Feb 16 21:47:19 crc kubenswrapper[4811]: I0216 21:47:19.324015 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wk8x\" (UniqueName: \"kubernetes.io/projected/0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8-kube-api-access-6wk8x\") pod \"0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8\" (UID: \"0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8\") " Feb 16 21:47:19 crc kubenswrapper[4811]: I0216 21:47:19.324121 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8-utilities\") pod \"0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8\" (UID: \"0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8\") " Feb 16 21:47:19 crc kubenswrapper[4811]: I0216 21:47:19.324185 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8-catalog-content\") pod \"0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8\" (UID: \"0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8\") " Feb 16 21:47:19 crc kubenswrapper[4811]: I0216 21:47:19.325102 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8-utilities" (OuterVolumeSpecName: "utilities") pod "0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8" (UID: "0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:47:19 crc kubenswrapper[4811]: I0216 21:47:19.330719 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8-kube-api-access-6wk8x" (OuterVolumeSpecName: "kube-api-access-6wk8x") pod "0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8" (UID: "0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8"). InnerVolumeSpecName "kube-api-access-6wk8x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:47:19 crc kubenswrapper[4811]: I0216 21:47:19.386875 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8" (UID: "0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:47:19 crc kubenswrapper[4811]: I0216 21:47:19.426370 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:47:19 crc kubenswrapper[4811]: I0216 21:47:19.426405 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:47:19 crc kubenswrapper[4811]: I0216 21:47:19.426421 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wk8x\" (UniqueName: \"kubernetes.io/projected/0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8-kube-api-access-6wk8x\") on node \"crc\" DevicePath \"\"" Feb 16 21:47:19 crc kubenswrapper[4811]: I0216 21:47:19.740047 4811 generic.go:334] "Generic (PLEG): container finished" podID="0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8" containerID="4d3d6baff6ad08b2edf0fb682869e1082a1cfc12963d57bece70a5992edb1d09" exitCode=0 Feb 16 21:47:19 crc kubenswrapper[4811]: I0216 21:47:19.740121 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cbm26" Feb 16 21:47:19 crc kubenswrapper[4811]: I0216 21:47:19.740136 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbm26" event={"ID":"0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8","Type":"ContainerDied","Data":"4d3d6baff6ad08b2edf0fb682869e1082a1cfc12963d57bece70a5992edb1d09"} Feb 16 21:47:19 crc kubenswrapper[4811]: I0216 21:47:19.740189 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbm26" event={"ID":"0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8","Type":"ContainerDied","Data":"dc0c64a95ba4b48970f71cde37d7226bd610e1fbf7cc5a7c68f97b72f78e1925"} Feb 16 21:47:19 crc kubenswrapper[4811]: I0216 21:47:19.740229 4811 scope.go:117] "RemoveContainer" containerID="4d3d6baff6ad08b2edf0fb682869e1082a1cfc12963d57bece70a5992edb1d09" Feb 16 21:47:19 crc kubenswrapper[4811]: I0216 21:47:19.767887 4811 scope.go:117] "RemoveContainer" containerID="98fa42ff0a2efcf5fe743cec8728ad806bc94e95c6d210ca0d8675e6243b5269" Feb 16 21:47:19 crc kubenswrapper[4811]: I0216 21:47:19.796291 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cbm26"] Feb 16 21:47:19 crc kubenswrapper[4811]: I0216 21:47:19.804034 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cbm26"] Feb 16 21:47:19 crc kubenswrapper[4811]: I0216 21:47:19.804897 4811 scope.go:117] "RemoveContainer" containerID="644ad4809c4012deeb075c302bfd1fd883761eacc20aab0142270a84666f5ce5" Feb 16 21:47:19 crc kubenswrapper[4811]: I0216 21:47:19.872167 4811 scope.go:117] "RemoveContainer" containerID="4d3d6baff6ad08b2edf0fb682869e1082a1cfc12963d57bece70a5992edb1d09" Feb 16 21:47:19 crc kubenswrapper[4811]: E0216 21:47:19.878459 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4d3d6baff6ad08b2edf0fb682869e1082a1cfc12963d57bece70a5992edb1d09\": container with ID starting with 4d3d6baff6ad08b2edf0fb682869e1082a1cfc12963d57bece70a5992edb1d09 not found: ID does not exist" containerID="4d3d6baff6ad08b2edf0fb682869e1082a1cfc12963d57bece70a5992edb1d09" Feb 16 21:47:19 crc kubenswrapper[4811]: I0216 21:47:19.878504 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d3d6baff6ad08b2edf0fb682869e1082a1cfc12963d57bece70a5992edb1d09"} err="failed to get container status \"4d3d6baff6ad08b2edf0fb682869e1082a1cfc12963d57bece70a5992edb1d09\": rpc error: code = NotFound desc = could not find container \"4d3d6baff6ad08b2edf0fb682869e1082a1cfc12963d57bece70a5992edb1d09\": container with ID starting with 4d3d6baff6ad08b2edf0fb682869e1082a1cfc12963d57bece70a5992edb1d09 not found: ID does not exist" Feb 16 21:47:19 crc kubenswrapper[4811]: I0216 21:47:19.878533 4811 scope.go:117] "RemoveContainer" containerID="98fa42ff0a2efcf5fe743cec8728ad806bc94e95c6d210ca0d8675e6243b5269" Feb 16 21:47:19 crc kubenswrapper[4811]: E0216 21:47:19.880421 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98fa42ff0a2efcf5fe743cec8728ad806bc94e95c6d210ca0d8675e6243b5269\": container with ID starting with 98fa42ff0a2efcf5fe743cec8728ad806bc94e95c6d210ca0d8675e6243b5269 not found: ID does not exist" containerID="98fa42ff0a2efcf5fe743cec8728ad806bc94e95c6d210ca0d8675e6243b5269" Feb 16 21:47:19 crc kubenswrapper[4811]: I0216 21:47:19.880475 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98fa42ff0a2efcf5fe743cec8728ad806bc94e95c6d210ca0d8675e6243b5269"} err="failed to get container status \"98fa42ff0a2efcf5fe743cec8728ad806bc94e95c6d210ca0d8675e6243b5269\": rpc error: code = NotFound desc = could not find container \"98fa42ff0a2efcf5fe743cec8728ad806bc94e95c6d210ca0d8675e6243b5269\": container with ID 
starting with 98fa42ff0a2efcf5fe743cec8728ad806bc94e95c6d210ca0d8675e6243b5269 not found: ID does not exist" Feb 16 21:47:19 crc kubenswrapper[4811]: I0216 21:47:19.880509 4811 scope.go:117] "RemoveContainer" containerID="644ad4809c4012deeb075c302bfd1fd883761eacc20aab0142270a84666f5ce5" Feb 16 21:47:19 crc kubenswrapper[4811]: E0216 21:47:19.882132 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"644ad4809c4012deeb075c302bfd1fd883761eacc20aab0142270a84666f5ce5\": container with ID starting with 644ad4809c4012deeb075c302bfd1fd883761eacc20aab0142270a84666f5ce5 not found: ID does not exist" containerID="644ad4809c4012deeb075c302bfd1fd883761eacc20aab0142270a84666f5ce5" Feb 16 21:47:19 crc kubenswrapper[4811]: I0216 21:47:19.882166 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"644ad4809c4012deeb075c302bfd1fd883761eacc20aab0142270a84666f5ce5"} err="failed to get container status \"644ad4809c4012deeb075c302bfd1fd883761eacc20aab0142270a84666f5ce5\": rpc error: code = NotFound desc = could not find container \"644ad4809c4012deeb075c302bfd1fd883761eacc20aab0142270a84666f5ce5\": container with ID starting with 644ad4809c4012deeb075c302bfd1fd883761eacc20aab0142270a84666f5ce5 not found: ID does not exist" Feb 16 21:47:20 crc kubenswrapper[4811]: I0216 21:47:20.714922 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8" path="/var/lib/kubelet/pods/0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8/volumes" Feb 16 21:47:23 crc kubenswrapper[4811]: I0216 21:47:23.788721 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-65947bdd9b-jmw6n_d8eaf998-04df-433c-93e9-df5a9261330d/kube-rbac-proxy/0.log" Feb 16 21:47:23 crc kubenswrapper[4811]: I0216 21:47:23.849400 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-65947bdd9b-jmw6n_d8eaf998-04df-433c-93e9-df5a9261330d/manager/0.log" Feb 16 21:47:27 crc kubenswrapper[4811]: I0216 21:47:27.703562 4811 scope.go:117] "RemoveContainer" containerID="0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" Feb 16 21:47:27 crc kubenswrapper[4811]: E0216 21:47:27.704231 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:47:28 crc kubenswrapper[4811]: E0216 21:47:28.704499 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:47:39 crc kubenswrapper[4811]: I0216 21:47:39.704277 4811 scope.go:117] "RemoveContainer" containerID="0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" Feb 16 21:47:39 crc kubenswrapper[4811]: E0216 21:47:39.706365 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:47:39 crc kubenswrapper[4811]: E0216 21:47:39.706485 4811 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:47:45 crc kubenswrapper[4811]: E0216 21:47:45.988070 4811 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.9:45860->38.102.83.9:38523: write tcp 38.102.83.9:45860->38.102.83.9:38523: write: broken pipe Feb 16 21:47:50 crc kubenswrapper[4811]: I0216 21:47:50.712903 4811 scope.go:117] "RemoveContainer" containerID="0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" Feb 16 21:47:50 crc kubenswrapper[4811]: E0216 21:47:50.714868 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:47:54 crc kubenswrapper[4811]: E0216 21:47:54.704665 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:48:02 crc kubenswrapper[4811]: I0216 21:48:02.708917 4811 scope.go:117] "RemoveContainer" containerID="0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" Feb 16 21:48:02 crc kubenswrapper[4811]: E0216 21:48:02.710717 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:48:09 crc kubenswrapper[4811]: E0216 21:48:09.705635 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:48:13 crc kubenswrapper[4811]: I0216 21:48:13.703518 4811 scope.go:117] "RemoveContainer" containerID="0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" Feb 16 21:48:13 crc kubenswrapper[4811]: E0216 21:48:13.705485 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:48:22 crc kubenswrapper[4811]: E0216 21:48:22.711016 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:48:26 crc kubenswrapper[4811]: I0216 21:48:26.705438 4811 scope.go:117] "RemoveContainer" containerID="0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" Feb 16 21:48:26 crc 
kubenswrapper[4811]: E0216 21:48:26.707136 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:48:35 crc kubenswrapper[4811]: E0216 21:48:35.704932 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:48:37 crc kubenswrapper[4811]: I0216 21:48:37.709631 4811 scope.go:117] "RemoveContainer" containerID="0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" Feb 16 21:48:37 crc kubenswrapper[4811]: E0216 21:48:37.710491 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:48:49 crc kubenswrapper[4811]: E0216 21:48:49.704698 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:48:50 crc kubenswrapper[4811]: I0216 
21:48:50.705996 4811 scope.go:117] "RemoveContainer" containerID="0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" Feb 16 21:48:50 crc kubenswrapper[4811]: E0216 21:48:50.706819 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:48:53 crc kubenswrapper[4811]: I0216 21:48:53.753427 4811 generic.go:334] "Generic (PLEG): container finished" podID="a52b47b0-9a9a-4264-bb3f-685b8a948004" containerID="18e3b4ab6c61d74575cea6b46ca49fa57afb0fd2645400a7a004ff4d614b9f7f" exitCode=0 Feb 16 21:48:53 crc kubenswrapper[4811]: I0216 21:48:53.753590 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-272g5/must-gather-9khjs" event={"ID":"a52b47b0-9a9a-4264-bb3f-685b8a948004","Type":"ContainerDied","Data":"18e3b4ab6c61d74575cea6b46ca49fa57afb0fd2645400a7a004ff4d614b9f7f"} Feb 16 21:48:53 crc kubenswrapper[4811]: I0216 21:48:53.754519 4811 scope.go:117] "RemoveContainer" containerID="18e3b4ab6c61d74575cea6b46ca49fa57afb0fd2645400a7a004ff4d614b9f7f" Feb 16 21:48:53 crc kubenswrapper[4811]: I0216 21:48:53.864803 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-272g5_must-gather-9khjs_a52b47b0-9a9a-4264-bb3f-685b8a948004/gather/0.log" Feb 16 21:49:01 crc kubenswrapper[4811]: I0216 21:49:01.232930 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-272g5/must-gather-9khjs"] Feb 16 21:49:01 crc kubenswrapper[4811]: I0216 21:49:01.233698 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-272g5/must-gather-9khjs" 
podUID="a52b47b0-9a9a-4264-bb3f-685b8a948004" containerName="copy" containerID="cri-o://1c1c7c29b6b6d04a7d87527a47f7b2efae98d221057827b3cbae486097d4dac7" gracePeriod=2 Feb 16 21:49:01 crc kubenswrapper[4811]: I0216 21:49:01.245716 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-272g5/must-gather-9khjs"] Feb 16 21:49:01 crc kubenswrapper[4811]: E0216 21:49:01.704783 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:49:01 crc kubenswrapper[4811]: I0216 21:49:01.775923 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-272g5_must-gather-9khjs_a52b47b0-9a9a-4264-bb3f-685b8a948004/copy/0.log" Feb 16 21:49:01 crc kubenswrapper[4811]: I0216 21:49:01.776456 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-272g5/must-gather-9khjs" Feb 16 21:49:01 crc kubenswrapper[4811]: I0216 21:49:01.833666 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-272g5_must-gather-9khjs_a52b47b0-9a9a-4264-bb3f-685b8a948004/copy/0.log" Feb 16 21:49:01 crc kubenswrapper[4811]: I0216 21:49:01.834015 4811 generic.go:334] "Generic (PLEG): container finished" podID="a52b47b0-9a9a-4264-bb3f-685b8a948004" containerID="1c1c7c29b6b6d04a7d87527a47f7b2efae98d221057827b3cbae486097d4dac7" exitCode=143 Feb 16 21:49:01 crc kubenswrapper[4811]: I0216 21:49:01.834079 4811 scope.go:117] "RemoveContainer" containerID="1c1c7c29b6b6d04a7d87527a47f7b2efae98d221057827b3cbae486097d4dac7" Feb 16 21:49:01 crc kubenswrapper[4811]: I0216 21:49:01.834081 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-272g5/must-gather-9khjs" Feb 16 21:49:01 crc kubenswrapper[4811]: I0216 21:49:01.834412 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbdj4\" (UniqueName: \"kubernetes.io/projected/a52b47b0-9a9a-4264-bb3f-685b8a948004-kube-api-access-vbdj4\") pod \"a52b47b0-9a9a-4264-bb3f-685b8a948004\" (UID: \"a52b47b0-9a9a-4264-bb3f-685b8a948004\") " Feb 16 21:49:01 crc kubenswrapper[4811]: I0216 21:49:01.834498 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a52b47b0-9a9a-4264-bb3f-685b8a948004-must-gather-output\") pod \"a52b47b0-9a9a-4264-bb3f-685b8a948004\" (UID: \"a52b47b0-9a9a-4264-bb3f-685b8a948004\") " Feb 16 21:49:01 crc kubenswrapper[4811]: I0216 21:49:01.841384 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52b47b0-9a9a-4264-bb3f-685b8a948004-kube-api-access-vbdj4" (OuterVolumeSpecName: "kube-api-access-vbdj4") pod "a52b47b0-9a9a-4264-bb3f-685b8a948004" (UID: "a52b47b0-9a9a-4264-bb3f-685b8a948004"). InnerVolumeSpecName "kube-api-access-vbdj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:49:01 crc kubenswrapper[4811]: I0216 21:49:01.914597 4811 scope.go:117] "RemoveContainer" containerID="18e3b4ab6c61d74575cea6b46ca49fa57afb0fd2645400a7a004ff4d614b9f7f" Feb 16 21:49:01 crc kubenswrapper[4811]: I0216 21:49:01.939145 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbdj4\" (UniqueName: \"kubernetes.io/projected/a52b47b0-9a9a-4264-bb3f-685b8a948004-kube-api-access-vbdj4\") on node \"crc\" DevicePath \"\"" Feb 16 21:49:01 crc kubenswrapper[4811]: I0216 21:49:01.976542 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a52b47b0-9a9a-4264-bb3f-685b8a948004-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a52b47b0-9a9a-4264-bb3f-685b8a948004" (UID: "a52b47b0-9a9a-4264-bb3f-685b8a948004"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:49:02 crc kubenswrapper[4811]: I0216 21:49:02.008163 4811 scope.go:117] "RemoveContainer" containerID="1c1c7c29b6b6d04a7d87527a47f7b2efae98d221057827b3cbae486097d4dac7" Feb 16 21:49:02 crc kubenswrapper[4811]: E0216 21:49:02.008916 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c1c7c29b6b6d04a7d87527a47f7b2efae98d221057827b3cbae486097d4dac7\": container with ID starting with 1c1c7c29b6b6d04a7d87527a47f7b2efae98d221057827b3cbae486097d4dac7 not found: ID does not exist" containerID="1c1c7c29b6b6d04a7d87527a47f7b2efae98d221057827b3cbae486097d4dac7" Feb 16 21:49:02 crc kubenswrapper[4811]: I0216 21:49:02.009042 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c1c7c29b6b6d04a7d87527a47f7b2efae98d221057827b3cbae486097d4dac7"} err="failed to get container status \"1c1c7c29b6b6d04a7d87527a47f7b2efae98d221057827b3cbae486097d4dac7\": rpc error: code = NotFound desc = could not 
find container \"1c1c7c29b6b6d04a7d87527a47f7b2efae98d221057827b3cbae486097d4dac7\": container with ID starting with 1c1c7c29b6b6d04a7d87527a47f7b2efae98d221057827b3cbae486097d4dac7 not found: ID does not exist" Feb 16 21:49:02 crc kubenswrapper[4811]: I0216 21:49:02.009131 4811 scope.go:117] "RemoveContainer" containerID="18e3b4ab6c61d74575cea6b46ca49fa57afb0fd2645400a7a004ff4d614b9f7f" Feb 16 21:49:02 crc kubenswrapper[4811]: E0216 21:49:02.009488 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18e3b4ab6c61d74575cea6b46ca49fa57afb0fd2645400a7a004ff4d614b9f7f\": container with ID starting with 18e3b4ab6c61d74575cea6b46ca49fa57afb0fd2645400a7a004ff4d614b9f7f not found: ID does not exist" containerID="18e3b4ab6c61d74575cea6b46ca49fa57afb0fd2645400a7a004ff4d614b9f7f" Feb 16 21:49:02 crc kubenswrapper[4811]: I0216 21:49:02.009515 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18e3b4ab6c61d74575cea6b46ca49fa57afb0fd2645400a7a004ff4d614b9f7f"} err="failed to get container status \"18e3b4ab6c61d74575cea6b46ca49fa57afb0fd2645400a7a004ff4d614b9f7f\": rpc error: code = NotFound desc = could not find container \"18e3b4ab6c61d74575cea6b46ca49fa57afb0fd2645400a7a004ff4d614b9f7f\": container with ID starting with 18e3b4ab6c61d74575cea6b46ca49fa57afb0fd2645400a7a004ff4d614b9f7f not found: ID does not exist" Feb 16 21:49:02 crc kubenswrapper[4811]: I0216 21:49:02.041472 4811 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a52b47b0-9a9a-4264-bb3f-685b8a948004-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 16 21:49:02 crc kubenswrapper[4811]: I0216 21:49:02.715345 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52b47b0-9a9a-4264-bb3f-685b8a948004" path="/var/lib/kubelet/pods/a52b47b0-9a9a-4264-bb3f-685b8a948004/volumes" Feb 16 21:49:05 crc 
kubenswrapper[4811]: I0216 21:49:05.703682 4811 scope.go:117] "RemoveContainer" containerID="0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" Feb 16 21:49:05 crc kubenswrapper[4811]: E0216 21:49:05.704383 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:49:14 crc kubenswrapper[4811]: E0216 21:49:14.705428 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:49:20 crc kubenswrapper[4811]: I0216 21:49:20.703469 4811 scope.go:117] "RemoveContainer" containerID="0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" Feb 16 21:49:20 crc kubenswrapper[4811]: E0216 21:49:20.704852 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:49:26 crc kubenswrapper[4811]: E0216 21:49:26.705622 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:49:31 crc kubenswrapper[4811]: I0216 21:49:31.704052 4811 scope.go:117] "RemoveContainer" containerID="0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" Feb 16 21:49:31 crc kubenswrapper[4811]: E0216 21:49:31.705633 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:49:41 crc kubenswrapper[4811]: E0216 21:49:41.706583 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:49:44 crc kubenswrapper[4811]: I0216 21:49:44.702737 4811 scope.go:117] "RemoveContainer" containerID="0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" Feb 16 21:49:44 crc kubenswrapper[4811]: E0216 21:49:44.703712 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fh2mx_openshift-machine-config-operator(aa95b3fc-1bfa-44f3-b568-7f325b230c3c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" Feb 16 21:49:55 crc kubenswrapper[4811]: I0216 21:49:55.702982 4811 
scope.go:117] "RemoveContainer" containerID="0e57e8de5cbac94b7c873397918330a8fd3de5a9052893854bc9c98ac3579d01" Feb 16 21:49:56 crc kubenswrapper[4811]: I0216 21:49:56.428176 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" event={"ID":"aa95b3fc-1bfa-44f3-b568-7f325b230c3c","Type":"ContainerStarted","Data":"bd5933bd027e4649d1bd9e06b765fef0b76ff594f001d890958e683a124e403c"} Feb 16 21:49:56 crc kubenswrapper[4811]: I0216 21:49:56.706698 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:49:56 crc kubenswrapper[4811]: E0216 21:49:56.845179 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 21:49:56 crc kubenswrapper[4811]: E0216 21:49:56.845268 4811 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 21:49:56 crc kubenswrapper[4811]: E0216 21:49:56.845425 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s56zx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-x49kk_openstack(46d0afcb-2a14-4e67-89fc-ed848d1637ce): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:49:56 crc kubenswrapper[4811]: E0216 21:49:56.846843 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:50:09 crc kubenswrapper[4811]: E0216 21:50:09.707322 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:50:23 crc kubenswrapper[4811]: E0216 21:50:23.706816 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:50:38 crc kubenswrapper[4811]: E0216 21:50:38.706286 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:50:50 crc kubenswrapper[4811]: E0216 21:50:50.705994 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:51:03 crc kubenswrapper[4811]: E0216 21:51:03.706037 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:51:15 crc kubenswrapper[4811]: E0216 21:51:15.705332 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:51:29 crc kubenswrapper[4811]: E0216 21:51:29.706421 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:51:40 crc kubenswrapper[4811]: E0216 21:51:40.705307 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:51:52 crc kubenswrapper[4811]: E0216 21:51:52.727992 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:51:55 crc kubenswrapper[4811]: I0216 21:51:55.249111 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-srv5s"] Feb 16 21:51:55 crc 
kubenswrapper[4811]: E0216 21:51:55.250109 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a52b47b0-9a9a-4264-bb3f-685b8a948004" containerName="gather" Feb 16 21:51:55 crc kubenswrapper[4811]: I0216 21:51:55.250126 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="a52b47b0-9a9a-4264-bb3f-685b8a948004" containerName="gather" Feb 16 21:51:55 crc kubenswrapper[4811]: E0216 21:51:55.250164 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8" containerName="registry-server" Feb 16 21:51:55 crc kubenswrapper[4811]: I0216 21:51:55.250172 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8" containerName="registry-server" Feb 16 21:51:55 crc kubenswrapper[4811]: E0216 21:51:55.250191 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8" containerName="extract-content" Feb 16 21:51:55 crc kubenswrapper[4811]: I0216 21:51:55.250224 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8" containerName="extract-content" Feb 16 21:51:55 crc kubenswrapper[4811]: E0216 21:51:55.250246 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a52b47b0-9a9a-4264-bb3f-685b8a948004" containerName="copy" Feb 16 21:51:55 crc kubenswrapper[4811]: I0216 21:51:55.250254 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="a52b47b0-9a9a-4264-bb3f-685b8a948004" containerName="copy" Feb 16 21:51:55 crc kubenswrapper[4811]: E0216 21:51:55.250271 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8" containerName="extract-utilities" Feb 16 21:51:55 crc kubenswrapper[4811]: I0216 21:51:55.250280 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8" containerName="extract-utilities" Feb 16 21:51:55 crc kubenswrapper[4811]: I0216 21:51:55.250526 4811 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a52b47b0-9a9a-4264-bb3f-685b8a948004" containerName="copy" Feb 16 21:51:55 crc kubenswrapper[4811]: I0216 21:51:55.250547 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="0702afbc-5ac5-4d9f-8b3b-1a55fff89cf8" containerName="registry-server" Feb 16 21:51:55 crc kubenswrapper[4811]: I0216 21:51:55.250584 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="a52b47b0-9a9a-4264-bb3f-685b8a948004" containerName="gather" Feb 16 21:51:55 crc kubenswrapper[4811]: I0216 21:51:55.252414 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-srv5s" Feb 16 21:51:55 crc kubenswrapper[4811]: I0216 21:51:55.283376 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-srv5s"] Feb 16 21:51:55 crc kubenswrapper[4811]: I0216 21:51:55.346599 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28693fbd-61d3-4517-8311-f3b74926ceb3-utilities\") pod \"certified-operators-srv5s\" (UID: \"28693fbd-61d3-4517-8311-f3b74926ceb3\") " pod="openshift-marketplace/certified-operators-srv5s" Feb 16 21:51:55 crc kubenswrapper[4811]: I0216 21:51:55.346758 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28693fbd-61d3-4517-8311-f3b74926ceb3-catalog-content\") pod \"certified-operators-srv5s\" (UID: \"28693fbd-61d3-4517-8311-f3b74926ceb3\") " pod="openshift-marketplace/certified-operators-srv5s" Feb 16 21:51:55 crc kubenswrapper[4811]: I0216 21:51:55.346827 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9s6m\" (UniqueName: \"kubernetes.io/projected/28693fbd-61d3-4517-8311-f3b74926ceb3-kube-api-access-x9s6m\") 
pod \"certified-operators-srv5s\" (UID: \"28693fbd-61d3-4517-8311-f3b74926ceb3\") " pod="openshift-marketplace/certified-operators-srv5s" Feb 16 21:51:55 crc kubenswrapper[4811]: I0216 21:51:55.449055 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9s6m\" (UniqueName: \"kubernetes.io/projected/28693fbd-61d3-4517-8311-f3b74926ceb3-kube-api-access-x9s6m\") pod \"certified-operators-srv5s\" (UID: \"28693fbd-61d3-4517-8311-f3b74926ceb3\") " pod="openshift-marketplace/certified-operators-srv5s" Feb 16 21:51:55 crc kubenswrapper[4811]: I0216 21:51:55.449287 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28693fbd-61d3-4517-8311-f3b74926ceb3-utilities\") pod \"certified-operators-srv5s\" (UID: \"28693fbd-61d3-4517-8311-f3b74926ceb3\") " pod="openshift-marketplace/certified-operators-srv5s" Feb 16 21:51:55 crc kubenswrapper[4811]: I0216 21:51:55.449482 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28693fbd-61d3-4517-8311-f3b74926ceb3-catalog-content\") pod \"certified-operators-srv5s\" (UID: \"28693fbd-61d3-4517-8311-f3b74926ceb3\") " pod="openshift-marketplace/certified-operators-srv5s" Feb 16 21:51:55 crc kubenswrapper[4811]: I0216 21:51:55.449930 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28693fbd-61d3-4517-8311-f3b74926ceb3-utilities\") pod \"certified-operators-srv5s\" (UID: \"28693fbd-61d3-4517-8311-f3b74926ceb3\") " pod="openshift-marketplace/certified-operators-srv5s" Feb 16 21:51:55 crc kubenswrapper[4811]: I0216 21:51:55.450243 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28693fbd-61d3-4517-8311-f3b74926ceb3-catalog-content\") pod \"certified-operators-srv5s\" (UID: 
\"28693fbd-61d3-4517-8311-f3b74926ceb3\") " pod="openshift-marketplace/certified-operators-srv5s" Feb 16 21:51:55 crc kubenswrapper[4811]: I0216 21:51:55.471778 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9s6m\" (UniqueName: \"kubernetes.io/projected/28693fbd-61d3-4517-8311-f3b74926ceb3-kube-api-access-x9s6m\") pod \"certified-operators-srv5s\" (UID: \"28693fbd-61d3-4517-8311-f3b74926ceb3\") " pod="openshift-marketplace/certified-operators-srv5s" Feb 16 21:51:55 crc kubenswrapper[4811]: I0216 21:51:55.577766 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-srv5s" Feb 16 21:51:56 crc kubenswrapper[4811]: I0216 21:51:56.143804 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-srv5s"] Feb 16 21:51:56 crc kubenswrapper[4811]: I0216 21:51:56.831733 4811 generic.go:334] "Generic (PLEG): container finished" podID="28693fbd-61d3-4517-8311-f3b74926ceb3" containerID="a36926b804fca5c5cf5cd2b4df38073813784b6d883ea2c17775e8f81f78b70e" exitCode=0 Feb 16 21:51:56 crc kubenswrapper[4811]: I0216 21:51:56.832074 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srv5s" event={"ID":"28693fbd-61d3-4517-8311-f3b74926ceb3","Type":"ContainerDied","Data":"a36926b804fca5c5cf5cd2b4df38073813784b6d883ea2c17775e8f81f78b70e"} Feb 16 21:51:56 crc kubenswrapper[4811]: I0216 21:51:56.832111 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srv5s" event={"ID":"28693fbd-61d3-4517-8311-f3b74926ceb3","Type":"ContainerStarted","Data":"7bf4978d259ca31d62a31687db5f91bdade2ec84118edd6d431cfd79d68015b2"} Feb 16 21:51:57 crc kubenswrapper[4811]: I0216 21:51:57.843063 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srv5s" 
event={"ID":"28693fbd-61d3-4517-8311-f3b74926ceb3","Type":"ContainerStarted","Data":"ebfd40399bb789d4b17df19b84da27d869a6eebff6497e2beae67fb625fa567f"} Feb 16 21:51:59 crc kubenswrapper[4811]: I0216 21:51:59.862872 4811 generic.go:334] "Generic (PLEG): container finished" podID="28693fbd-61d3-4517-8311-f3b74926ceb3" containerID="ebfd40399bb789d4b17df19b84da27d869a6eebff6497e2beae67fb625fa567f" exitCode=0 Feb 16 21:51:59 crc kubenswrapper[4811]: I0216 21:51:59.862963 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srv5s" event={"ID":"28693fbd-61d3-4517-8311-f3b74926ceb3","Type":"ContainerDied","Data":"ebfd40399bb789d4b17df19b84da27d869a6eebff6497e2beae67fb625fa567f"} Feb 16 21:52:00 crc kubenswrapper[4811]: I0216 21:52:00.874888 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srv5s" event={"ID":"28693fbd-61d3-4517-8311-f3b74926ceb3","Type":"ContainerStarted","Data":"d1a71e0704772ae4bd75edc7dd3b3d94adcfaccb71002e50851ef38f8d318ca7"} Feb 16 21:52:00 crc kubenswrapper[4811]: I0216 21:52:00.903335 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-srv5s" podStartSLOduration=2.469956741 podStartE2EDuration="5.903311849s" podCreationTimestamp="2026-02-16 21:51:55 +0000 UTC" firstStartedPulling="2026-02-16 21:51:56.837447649 +0000 UTC m=+3334.766743577" lastFinishedPulling="2026-02-16 21:52:00.270802747 +0000 UTC m=+3338.200098685" observedRunningTime="2026-02-16 21:52:00.893977059 +0000 UTC m=+3338.823273007" watchObservedRunningTime="2026-02-16 21:52:00.903311849 +0000 UTC m=+3338.832607787" Feb 16 21:52:05 crc kubenswrapper[4811]: I0216 21:52:05.578413 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-srv5s" Feb 16 21:52:05 crc kubenswrapper[4811]: I0216 21:52:05.579052 4811 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-srv5s" Feb 16 21:52:05 crc kubenswrapper[4811]: I0216 21:52:05.654021 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-srv5s" Feb 16 21:52:06 crc kubenswrapper[4811]: I0216 21:52:06.056331 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-srv5s" Feb 16 21:52:06 crc kubenswrapper[4811]: I0216 21:52:06.136629 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-srv5s"] Feb 16 21:52:07 crc kubenswrapper[4811]: E0216 21:52:07.707229 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:52:08 crc kubenswrapper[4811]: I0216 21:52:08.004298 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-srv5s" podUID="28693fbd-61d3-4517-8311-f3b74926ceb3" containerName="registry-server" containerID="cri-o://d1a71e0704772ae4bd75edc7dd3b3d94adcfaccb71002e50851ef38f8d318ca7" gracePeriod=2 Feb 16 21:52:08 crc kubenswrapper[4811]: I0216 21:52:08.561884 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-srv5s" Feb 16 21:52:08 crc kubenswrapper[4811]: I0216 21:52:08.663524 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28693fbd-61d3-4517-8311-f3b74926ceb3-catalog-content\") pod \"28693fbd-61d3-4517-8311-f3b74926ceb3\" (UID: \"28693fbd-61d3-4517-8311-f3b74926ceb3\") " Feb 16 21:52:08 crc kubenswrapper[4811]: I0216 21:52:08.663643 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28693fbd-61d3-4517-8311-f3b74926ceb3-utilities\") pod \"28693fbd-61d3-4517-8311-f3b74926ceb3\" (UID: \"28693fbd-61d3-4517-8311-f3b74926ceb3\") " Feb 16 21:52:08 crc kubenswrapper[4811]: I0216 21:52:08.663677 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9s6m\" (UniqueName: \"kubernetes.io/projected/28693fbd-61d3-4517-8311-f3b74926ceb3-kube-api-access-x9s6m\") pod \"28693fbd-61d3-4517-8311-f3b74926ceb3\" (UID: \"28693fbd-61d3-4517-8311-f3b74926ceb3\") " Feb 16 21:52:08 crc kubenswrapper[4811]: I0216 21:52:08.665131 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28693fbd-61d3-4517-8311-f3b74926ceb3-utilities" (OuterVolumeSpecName: "utilities") pod "28693fbd-61d3-4517-8311-f3b74926ceb3" (UID: "28693fbd-61d3-4517-8311-f3b74926ceb3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:52:08 crc kubenswrapper[4811]: I0216 21:52:08.669063 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28693fbd-61d3-4517-8311-f3b74926ceb3-kube-api-access-x9s6m" (OuterVolumeSpecName: "kube-api-access-x9s6m") pod "28693fbd-61d3-4517-8311-f3b74926ceb3" (UID: "28693fbd-61d3-4517-8311-f3b74926ceb3"). InnerVolumeSpecName "kube-api-access-x9s6m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:52:08 crc kubenswrapper[4811]: I0216 21:52:08.767051 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28693fbd-61d3-4517-8311-f3b74926ceb3-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:52:08 crc kubenswrapper[4811]: I0216 21:52:08.767091 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9s6m\" (UniqueName: \"kubernetes.io/projected/28693fbd-61d3-4517-8311-f3b74926ceb3-kube-api-access-x9s6m\") on node \"crc\" DevicePath \"\"" Feb 16 21:52:08 crc kubenswrapper[4811]: I0216 21:52:08.821881 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28693fbd-61d3-4517-8311-f3b74926ceb3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28693fbd-61d3-4517-8311-f3b74926ceb3" (UID: "28693fbd-61d3-4517-8311-f3b74926ceb3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:52:08 crc kubenswrapper[4811]: I0216 21:52:08.869733 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28693fbd-61d3-4517-8311-f3b74926ceb3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:52:09 crc kubenswrapper[4811]: I0216 21:52:09.013470 4811 generic.go:334] "Generic (PLEG): container finished" podID="28693fbd-61d3-4517-8311-f3b74926ceb3" containerID="d1a71e0704772ae4bd75edc7dd3b3d94adcfaccb71002e50851ef38f8d318ca7" exitCode=0 Feb 16 21:52:09 crc kubenswrapper[4811]: I0216 21:52:09.013505 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srv5s" event={"ID":"28693fbd-61d3-4517-8311-f3b74926ceb3","Type":"ContainerDied","Data":"d1a71e0704772ae4bd75edc7dd3b3d94adcfaccb71002e50851ef38f8d318ca7"} Feb 16 21:52:09 crc kubenswrapper[4811]: I0216 21:52:09.013549 4811 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-srv5s" event={"ID":"28693fbd-61d3-4517-8311-f3b74926ceb3","Type":"ContainerDied","Data":"7bf4978d259ca31d62a31687db5f91bdade2ec84118edd6d431cfd79d68015b2"} Feb 16 21:52:09 crc kubenswrapper[4811]: I0216 21:52:09.013566 4811 scope.go:117] "RemoveContainer" containerID="d1a71e0704772ae4bd75edc7dd3b3d94adcfaccb71002e50851ef38f8d318ca7" Feb 16 21:52:09 crc kubenswrapper[4811]: I0216 21:52:09.013592 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-srv5s" Feb 16 21:52:09 crc kubenswrapper[4811]: I0216 21:52:09.034511 4811 scope.go:117] "RemoveContainer" containerID="ebfd40399bb789d4b17df19b84da27d869a6eebff6497e2beae67fb625fa567f" Feb 16 21:52:09 crc kubenswrapper[4811]: I0216 21:52:09.048392 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-srv5s"] Feb 16 21:52:09 crc kubenswrapper[4811]: I0216 21:52:09.057786 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-srv5s"] Feb 16 21:52:09 crc kubenswrapper[4811]: I0216 21:52:09.070707 4811 scope.go:117] "RemoveContainer" containerID="a36926b804fca5c5cf5cd2b4df38073813784b6d883ea2c17775e8f81f78b70e" Feb 16 21:52:09 crc kubenswrapper[4811]: I0216 21:52:09.107096 4811 scope.go:117] "RemoveContainer" containerID="d1a71e0704772ae4bd75edc7dd3b3d94adcfaccb71002e50851ef38f8d318ca7" Feb 16 21:52:09 crc kubenswrapper[4811]: E0216 21:52:09.107601 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1a71e0704772ae4bd75edc7dd3b3d94adcfaccb71002e50851ef38f8d318ca7\": container with ID starting with d1a71e0704772ae4bd75edc7dd3b3d94adcfaccb71002e50851ef38f8d318ca7 not found: ID does not exist" containerID="d1a71e0704772ae4bd75edc7dd3b3d94adcfaccb71002e50851ef38f8d318ca7" Feb 16 21:52:09 crc kubenswrapper[4811]: I0216 
21:52:09.107637 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1a71e0704772ae4bd75edc7dd3b3d94adcfaccb71002e50851ef38f8d318ca7"} err="failed to get container status \"d1a71e0704772ae4bd75edc7dd3b3d94adcfaccb71002e50851ef38f8d318ca7\": rpc error: code = NotFound desc = could not find container \"d1a71e0704772ae4bd75edc7dd3b3d94adcfaccb71002e50851ef38f8d318ca7\": container with ID starting with d1a71e0704772ae4bd75edc7dd3b3d94adcfaccb71002e50851ef38f8d318ca7 not found: ID does not exist" Feb 16 21:52:09 crc kubenswrapper[4811]: I0216 21:52:09.107658 4811 scope.go:117] "RemoveContainer" containerID="ebfd40399bb789d4b17df19b84da27d869a6eebff6497e2beae67fb625fa567f" Feb 16 21:52:09 crc kubenswrapper[4811]: E0216 21:52:09.108126 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebfd40399bb789d4b17df19b84da27d869a6eebff6497e2beae67fb625fa567f\": container with ID starting with ebfd40399bb789d4b17df19b84da27d869a6eebff6497e2beae67fb625fa567f not found: ID does not exist" containerID="ebfd40399bb789d4b17df19b84da27d869a6eebff6497e2beae67fb625fa567f" Feb 16 21:52:09 crc kubenswrapper[4811]: I0216 21:52:09.108145 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebfd40399bb789d4b17df19b84da27d869a6eebff6497e2beae67fb625fa567f"} err="failed to get container status \"ebfd40399bb789d4b17df19b84da27d869a6eebff6497e2beae67fb625fa567f\": rpc error: code = NotFound desc = could not find container \"ebfd40399bb789d4b17df19b84da27d869a6eebff6497e2beae67fb625fa567f\": container with ID starting with ebfd40399bb789d4b17df19b84da27d869a6eebff6497e2beae67fb625fa567f not found: ID does not exist" Feb 16 21:52:09 crc kubenswrapper[4811]: I0216 21:52:09.108158 4811 scope.go:117] "RemoveContainer" containerID="a36926b804fca5c5cf5cd2b4df38073813784b6d883ea2c17775e8f81f78b70e" Feb 16 21:52:09 crc 
kubenswrapper[4811]: E0216 21:52:09.108484 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a36926b804fca5c5cf5cd2b4df38073813784b6d883ea2c17775e8f81f78b70e\": container with ID starting with a36926b804fca5c5cf5cd2b4df38073813784b6d883ea2c17775e8f81f78b70e not found: ID does not exist" containerID="a36926b804fca5c5cf5cd2b4df38073813784b6d883ea2c17775e8f81f78b70e" Feb 16 21:52:09 crc kubenswrapper[4811]: I0216 21:52:09.108522 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a36926b804fca5c5cf5cd2b4df38073813784b6d883ea2c17775e8f81f78b70e"} err="failed to get container status \"a36926b804fca5c5cf5cd2b4df38073813784b6d883ea2c17775e8f81f78b70e\": rpc error: code = NotFound desc = could not find container \"a36926b804fca5c5cf5cd2b4df38073813784b6d883ea2c17775e8f81f78b70e\": container with ID starting with a36926b804fca5c5cf5cd2b4df38073813784b6d883ea2c17775e8f81f78b70e not found: ID does not exist" Feb 16 21:52:10 crc kubenswrapper[4811]: I0216 21:52:10.718385 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28693fbd-61d3-4517-8311-f3b74926ceb3" path="/var/lib/kubelet/pods/28693fbd-61d3-4517-8311-f3b74926ceb3/volumes" Feb 16 21:52:18 crc kubenswrapper[4811]: I0216 21:52:18.363361 4811 patch_prober.go:28] interesting pod/machine-config-daemon-fh2mx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:52:18 crc kubenswrapper[4811]: I0216 21:52:18.364121 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fh2mx" podUID="aa95b3fc-1bfa-44f3-b568-7f325b230c3c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 16 21:52:19 crc kubenswrapper[4811]: E0216 21:52:19.705731 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:52:32 crc kubenswrapper[4811]: E0216 21:52:32.728983 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce" Feb 16 21:52:43 crc kubenswrapper[4811]: E0216 21:52:43.708148 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-x49kk" podUID="46d0afcb-2a14-4e67-89fc-ed848d1637ce"